16 December 2024 | Insights | Artificial Intelligence

AI journey toward compliance: First draft of the General-Purpose AI Code of Practice published

Reading time: 3 mins


Introduction

On 24 January 2024 the European Commission established the European Artificial Intelligence Office. Known as the AI Office, it is responsible for, among other things, implementing and enforcing AI regulations, promoting trustworthy AI, fostering international cooperation and applying sanctions.

Article 56 of the AI Act imposes an obligation on the AI Office to facilitate the drawing up of codes of practice to assist with its proper implementation. On 14 November 2024 the first draft of the General-Purpose AI Code of Practice (“the Code”) was published. Prepared by independent experts, appointed as Chairs and Vice-Chairs of four working groups, the Code is expected to undergo three further drafting rounds before concluding in April 2025. While the Code is not binding, Recital 117 of the AI Act provides that codes of practice should represent a central tool for compliance with the obligations set out under the AI Act. It is no surprise, therefore, that the first draft has sparked great interest, as it signals a significant first step forward in the implementation of the AI Act.

What is a General-Purpose AI System?

A general-purpose AI system is best explained by first outlining the difference between an AI system and an AI model. While related, these are distinct concepts in the field of AI. An AI model is not defined in the AI Act but has been described by IBM as a program that has been trained on a set of data to recognise certain patterns or make certain decisions without further human intervention. AI models apply different algorithms to relevant data inputs to achieve the tasks, or outputs, for which they have been programmed. Microsoft illustrates the difference by reference to a radiology scan: a scan of a patient’s chest might be shown to an AI model to predict whether the patient has COVID-19. An AI system, by contrast, would evaluate a broader range of information about the patient, beyond the COVID-19 prediction, to inform a clinical decision and treatment plan.

Both a “general-purpose AI model” and a “general-purpose AI system” are defined under Article 3 of the AI Act. A general-purpose AI model can be generally described as an AI model that is trained with a large amount of data using self-supervision at scale and that displays significant generality. A general-purpose AI system, in turn, is defined as an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems.

What is the first draft of the Code?

The aim of the Code is to support general-purpose AI Providers in meeting their obligations under the AI Act, which are more particularly set out in Articles 53 and 55. In our insight, To AI Literacy and Beyond, the Final Frontier, we explore the difference between “Providers” and “Deployers” ahead of the AI literacy obligations that apply from 2 February 2025. Given that the Code is aimed at Providers, it is worth recalling that a Provider is an entity that develops an AI system and places it on the market or puts it into service. Regardless of whether you are a Provider or Deployer of general-purpose AI systems (“GPAI”), however, it will be important to be aware of the Code in order to understand your rights as well as your obligations.

The Chairs and Vice-Chairs have stressed that the first draft of the Code merely sets a foundation for the further drafting rounds. The final draft will be published on 1 May 2025, in time for enforcement from 2 August 2025.

The AI Office intends to establish a “future-proof” Code which is flexible enough to apply for the next generation of models. The drafting of the Code was guided by the principles of the AI Act, taking into account proposals from industry, academia and civil society.

Core principles guiding the Code

The Code is structured around six core principles intended to ensure alignment with EU legal standards and values. The core principles are as follows.

  1. Alignment with EU Principles and Values
  2. Alignment with the AI Act and international approaches
  3. Proportionality to risks
  4. Future-proof
  5. Proportionality to the size of the general-purpose AI model provider
  6. Support and growth of the AI safety ecosystem

Rules for Providers of General-Purpose AI Models

Transparency

The Code proposes that GPAI Providers keep up-to-date, comprehensive documentation relating to the intended use, design and deployment of their model, as well as detailing acceptable use policies and potential risks. This documentation should be made available to the AI Office and downstream Providers to advance public transparency.

Copyright

The Code presents GPAI Providers with guidance on how to comply with their obligations under Article 53(1)(c) of the AI Act in relation to copyright laws. This includes advice on text and data mining (TDM), policies for upstream and downstream copyright compliance, and guidance on the use of crawlers that follow the Robots Exclusion Protocol.

Taxonomy of risks

Article 55(1) of the AI Act imposes additional responsibilities on Providers of GPAI models identified as presenting “systemic risks”. The Code sets forth guidance on which categories of risk may meet this definition, including offensive cybersecurity risks, risks enabling chemical, biological, radiological and nuclear weapons attacks, loss of control of GPAI models, the automated use of AI for research and development, persuasion and manipulation issues, and large-scale discrimination. It is recommended that GPAI Providers assess these risks regularly and implement appropriate mitigation strategies.

GPAI models with systemic risks

Once GPAI models with systemic risks are identified, the Code provides a comprehensive framework to proactively mitigate those risks. The Code proposes that a Safety and Security Framework (SSF) should be established, and further outlines technical and governance risk-mitigation policies for Providers of GPAI models with systemic risks.

Compliance with the GPAI Code

Although compliance with the Code is not compulsory, it is likely to be significant in demonstrating compliance with the AI Act in relation to GPAI models when those obligations take effect on 2 August 2025. It is important to note that the Code is intended only to provide guidance on meeting the AI Act’s obligations. GPAI Providers may therefore depart from the Code if they believe they can demonstrate compliance in an alternative way.

Next steps

Approximately 430 responses were submitted to the first draft of the Code. The Chairs and Vice-Chairs acknowledge that the first draft does not contain the granularity expected of the final draft; for example, it does not yet address how the Code will be updated and reviewed.

The various “Open Questions” contained in the first draft of the Code were specifically designed to obtain feedback from a diverse range of stakeholders to help shape the substance of the specific measures involved. The drafters welcomed feedback until 28 November 2024. On 10 December 2024 the AI Board convened to discuss the progress of the Code. It is anticipated that the EU AI Office will publish the second draft of the Code during the third week of December 2024.

Key takeaways

The key point to note here is the Commission’s commitment to implement the AI Act. While there is significant work to be done before the final draft is published on 1 May 2025, the first draft illustrates a significant step forward in balancing the need to regulate the use of AI while also promoting AI innovation.

AUTHORS: Ricky Kelly, Partner | Sarah Lucey
