Introduction
On 9 December 2023, the EU Council and Parliament reached a ‘provisional’ agreement on the AI Act (the “Act”). However, further work was needed at a technical level (and on the more controversial parts of the deal, particularly regarding foundation models) to iron out the finer details into an agreed and final text. The agreed text will then be formally adopted by both Parliament and Council and become EU law, which is expected to take place in the second half of 2024. Just before the publication of this Insight, a draft of the final text was made available online by Euractiv Tech Editor, Luca Bertuzzi, and the Committee of Permanent Representatives is reported to be meeting on 2 February 2024 to vote on adopting the text, although France is said to be seeking to delay this. We will be reviewing this draft and will shortly publish a separate Insight on the main points and provisions of note in the text.
Separately, on 5 October 2023 the European Commission published a draft of what are known as ‘standard contractual clauses’ for the procurement of artificial intelligence (“AI”) systems (the “AI SCCs”) and, with them, took another brave step into the murky waters of AI regulation by providing guidance for public bodies with specific references to some of the (then draft) provisions in the Act. It is important to flag that these clauses are not mandatory and are recommended for use by public bodies only, rather than private entities. They do, however, contain some practical guidance that both public and private bodies should be aware of, which is discussed further below.
Who are they designed for? Key takeaways
The clauses are designed to support public organisations with the necessary drafting to protect their interests, whether they are procuring or supplying AI solutions, while aligning with the core principles of the Act such as trustworthiness, transparency and safety. The clauses do not deal with matters outside the Act (such as interaction with the GDPR) and will require further legal input to adapt them to each specific contract. Key takeaways of the AI SCCs are highlighted below:
- High-Risk vs. Non-High-Risk AI Systems
The clauses are split into two categories depending on how the relevant AI system is classified under the Act. The ‘high-risk’ clauses are intended to cover AI systems that are classified as ‘high-risk’ within the meaning of the Act, e.g. AI deployed in medical devices or in the management of critical infrastructure. The ‘non-high-risk’ clauses were originally intended to cover the procurement of AI systems labelled as ‘limited’ or ‘minimal’ risk, such as foundation models (i.e., models that can be adapted to a variety of tasks) and general-purpose AI solutions such as ChatGPT. Both sets of clauses are designed to cover areas such as data governance, conformity assessments and transparency requirements.
However, one of the key takeaways of the recent ‘provisional agreement’ on the Act (and as reflected in the leaked draft) was the introduction of additional transparency and compliance obligations for general-purpose AI and foundation models. Businesses intending to procure general-purpose AI systems would therefore be best served by looking to the language used in the full version of the “high-risk” clauses published by the EU as a starting point, so as to sufficiently protect their interests while aligning with the added obligations under the Act.
That said, it is questionable what benefit attaches to the ‘non-high-risk’ clauses. The drafters note, correctly, that applying the requirements in the ‘non-high-risk’ AI SCCs to ‘non-high-risk’ AI systems is not mandatory under the AI Act, but is “recommended to improve trustworthiness of AI applications procured by public organisations”. That may be so, but why would a commercial provider of these non-high-risk systems ever voluntarily agree to such stringent requirements?
- Analysis: Finished Article or Starting Point?
Below are some examples of clauses taken from both the ‘high-risk’ and ‘non-high risk’ categories:
(a) High-Risk: Article 6. Transparency of the AI system:
“The Supplier ensures that the AI System has been and shall be designed and developed in such a way that the operation of the AI System is sufficiently transparent to enable the Public Organisation to reasonably understand the system’s functioning”.
(b) Non-High Risk: Article 2. Risk management system:
“The Supplier ensures that, prior to the Delivery of the AI System, a risk management system shall be established and implemented in relation to the AI System”.
These clauses are a clear attempt to simplify the complex supply chain in AI procurement. However, this simplification may not accurately reflect the complexity of the AI systems themselves and may lead to differing interpretations of what can be deemed “sufficiently transparent” or a suitable “risk management system”. Furthermore, the clauses are not intended to cover a specific contractual arrangement. They are a useful starting point for public organisations and may be integrated into any commercial agreement for AI procurement, but will need to be modified to suit each specific contractual context. These AI SCCs are by no means the finished article (pardon the pun): there are a number of issues that mean they cannot, and should not, simply be incorporated into an agreement wholesale (in the way the SCCs for international data transfers under the GDPR can be). For instance, in addressing only the Act, they leave untouched other legal regimes likely to be relevant in many scenarios, such as medical device regulation and data protection. Being limited to the Act, they also ignore other considerations relevant to parties to a contract for AI services, such as intellectual property rights (albeit this area is touched on in a somewhat cursory manner), liability, and so on. That said, they may well be a useful starting point for parties looking to incorporate terms into commercial agreements that reflect the requirements of the Act (at least to some degree), and, with careful advice and assistance from legal and technical experts, could form the basis for an agreement that is ultimately both balanced and Act-compliant.
World Economic Forum ("WEF") guidelines on the responsible procurement of AI solutions
The WEF has usefully provided its input on the matter and published guidelines for the responsible procurement of AI solutions. These guidelines are relevant to both the public and private sectors and serve as a benchmark for the procurement of AI. The main takeaways of the guidelines are to:
- assess potential AI solutions’ ethical standards and their regulatory compliance (currently something of a shot in the dark, given that the final text of the Act has yet to be adopted);
- align these solutions with your business and commercial objectives. Balancing the business need to procure AI against an organisation’s current AI capabilities is an important consideration for public and private bodies alike. It is important to weigh the uncertainty of procuring AI systems against the potential commercial benefit of investing time and money in them; and
- evaluate their potential social and ethical impact. Companies can take a proactive step here by carrying out a risk assessment early in the procurement process and setting up an internal AI governance team to prepare in advance for compliance audits under the Act.
Concluding remarks
While the AI SCCs provide us with a structured approach to contracting for AI systems, they are subject to certain significant limitations as currently drafted, including in their assumptions about roles in the AI supply chain and in not addressing some specific legal and commercial matters. At this point in time, they should be viewed very much as a ‘work in progress’, with modifications ongoing and input still flowing in from EU policy makers and industry leaders. Read in conjunction with the WEF guidelines on the responsible procurement of AI solutions, however, they give both private and public organisations a useful starting point in putting pen to paper on what will likely eventually become a common form of standard AI contract terms.