Introduction
The EU Artificial Intelligence Act 2024 (“AI Act”) officially entered into force on 2 August 2024 and signified a pivotal step in regulating the use of AI across the EU. The provisions of the AI Act will be binding and directly applicable across all EU Member States on a phased basis between 2 February 2025 and 31 December 2030. It is a significant piece of legislation extending to 144 pages, divided into 113 Articles spread across 13 Chapters, together with 13 Annexes.
Timeline for application:
- Chapters I and II – 2 February 2025
- Chapter III (Section 4), Chapters V and VII, Chapter XII (except Article 101), and Article 78 – 2 August 2025
- The remainder of the Act – 2 August 2026
- Article 6(1) of Chapter III and the corresponding obligations – 2 August 2027
Although most provisions will apply by 2 August 2027, certain obligations imposed on operators of high-risk AI systems, and on AI systems that form part of large-scale IT systems, will come into effect on various dates between 2 August 2026 and 31 December 2030.
What must you prepare for first?
Article 4, titled “AI Literacy”, comes into effect on 2 February 2025. It requires “Providers” and “Deployers” of AI systems to take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.
Who counts as a ‘provider’ or a ‘deployer’ is defined under Article 3. In essence, a provider is an entity that develops an AI system (a term that can itself be interpreted broadly) and places it on the market or puts it into service; whether the entity is paid for the AI system is irrelevant to this definition. A deployer, on the other hand, is an entity that uses an AI system, unless that use occurs in the course of a personal, non-professional activity.
While Article 4 imposes obligations on organisations, it also presents an opportunity to gain efficiencies and a competitive edge: compliance with the AI Act will help organisations harness AI’s full potential by building a better understanding of the technology.
Who must receive AI Literacy?
Article 4 states that providers and deployers must take measures to ensure that their “staff and other persons dealing with the operation and use of AI systems on their behalf” have a sufficient level of AI Literacy. Organisations must therefore not only take measures to build AI Literacy among their own staff, but should also consider establishing policies covering any other persons using AI systems on their behalf. Although Article 4 may appear burdensome, the investment is worthwhile: organisations that invest in AI Literacy will be better placed to leverage the full potential of AI and, of course, to avoid potential penalties for non-compliance with the AI Act.
What does a “sufficient level of AI Literacy” mean?
Article 3 defines “AI Literacy” as “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.
What amounts to a “sufficient level of AI Literacy” depends on the individual’s technical knowledge, experience, education and training, the context in which the AI systems are to be used and, finally, the persons on whom the AI systems are to be used. Evidently, a one-size-fits-all approach will not work, as personnel have a range of technical abilities and duties in relation to AI. For that reason, organisations are advised to take the following measures to build AI Literacy and comply with Article 4.
1. Assess current AI Literacy levels
Before diving into training, organisations should complete a thorough assessment of their employees’ current AI Literacy levels. Effective methods of assessment include diagnostic tools, internal surveys and interviews. These assessments help tailor the training to a targeted approach, rather than a generalised one that may miss the specific needs of employees.
2. Adopt a holistic approach
This means creating a program which covers AI’s ethical, business and societal implications. A holistic approach to AI Literacy is broadly encouraged as employees would gain vital knowledge not only regarding how AI works, but also about how AI can support business goals and the importance of the fair and transparent use of AI.
3. Establish internal policies and procedures
Along with providing training, organisations should also consider implementing internal policies and procedures outlining the mandatory legal standards. This will provide employees with a clear understanding of the responsible use of AI.
4. Leverage external resources
The European AI Office recently published the First Draft of the General-Purpose AI Code of Practice. The Code will go through three further rounds of drafting and is anticipated to be published on 1 May 2025; the First Draft is publicly available to download. In addition, the European Artificial Intelligence Board, a key advisory body with representatives from each EU Member State, will support the European Commission in promoting AI Literacy. Collaboration with industry and regulatory bodies is recommended to support AI Literacy.
5. Ongoing education
Continuous learning and the monitoring of progress are crucial to maintaining and developing AI Literacy. Records of AI training programmes should be maintained so that they can be provided to regulatory bodies upon request.
Enforcement
Article 99, which forms part of Chapter XII, requires each Member State to lay down rules on penalties and other enforcement measures applicable to infringements by operators; those penalties must be effective, proportionate and dissuasive. Chapter XII will not come into effect until 2 August 2025. While there is currently no specific penalty for failing to take measures to ensure AI Literacy, fines of up to €7,500,000 or 1% of the offender’s total worldwide annual turnover for the preceding financial year, whichever is higher, may be imposed for supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities. Organisations should keep this in mind when providing authorities with information on the level of AI Literacy among staff.
Key takeaways
Ultimately, the aim of implementing AI Literacy should not be merely to avoid penalties, but to establish and develop the trustworthy use of AI. There is little doubt that AI will play a pivotal role in business development in the future. For this reason, organisations should move quickly to establish and develop AI Literacy.