How the EU’s rules on AI models will ripple down the value chain
The recently adopted AI rulebook includes obligations for providers of general-purpose AI models such as OpenAI, Anthropic and Mistral AI. While the scope of these duties is defined in the legal text, a last-minute addition to the law's chapeau, the non-binding section meant to clarify how the rules apply, leaves room for a much broader interpretation in the future.
13 June 2024
By Luca Bertuzzi, Senior AI Correspondent for MLex
The EU has recently adopted a comprehensive rulebook for artificial intelligence, which includes rules for the providers of general-purpose AI (GPAI) models such as OpenAI, Anthropic and Mistral AI.
However, the scope of these duties could prove much broader. The potential for scope creep lies in a provision added at the last minute to the law's chapeau, the preamble that is not part of the binding legal text but is meant to clarify how the rules on general-purpose AI models apply.
GPAI models can be used to develop AI applications for a wide range of tasks. A prominent example is OpenAI's household-name chatbot ChatGPT, which runs on the GPAI model GPT-4.
Users can also adapt these models to their specific needs. For instance, leading French startup Mistral AI provides a fine-tuning platform that lets users customize and optimize its open-source and commercial models.
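What counts as "modification or fine-tuning" can be as simple as a short training script. Purely by way of illustration, a minimal sketch using the open-source Hugging Face Transformers library might look like the following; the model name, data and hyperparameters are placeholders, not anything prescribed by the law:

```python
# Illustrative sketch only: a downstream player fine-tuning an open-weight
# model on its own data. Model, data and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # any open-weight GPAI model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# A toy domain-specific corpus standing in for the downstream player's data.
corpus = Dataset.from_dict({"text": ["Example domain-specific record ..."]})
tokenized = corpus.map(
    lambda row: tokenizer(row["text"], truncation=True),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine_tuned_model", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False yields plain next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()       # updates the base model's weights on the new data
trainer.save_model()  # the modified model the new provision would capture
```

Even a small run like this changes the model's weights, which, on the provision's wording, could be enough to bring the fine-tuner within the scope of the GPAI obligations.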
Scope creep
“In the case of a modification or fine-tuning of a model, the obligations for providers of GPAI models should be limited to that modification or fine-tuning,” reads the law’s preamble.
In other words, anyone who alters a GPAI model will fall within the scope of the AI Act's obligations in proportion to the extent of the modification. That amounts to a significant extension of these duties' reach.
European companies were reassured throughout the negotiations that this chapter would apply only to GPAI model providers such as OpenAI, Mistral AI or Google DeepMind. Yet this provision, introduced in the final technical meetings after the political agreement, extends the obligations down the value chain.
EU policymakers intended to ensure that value-chain actors would be covered in proportion to their role and size. Still, the result might be a complex patchwork when it comes to assessing one's responsibilities under the AI law.
“Mapping and navigating the who’s who and what’s what when it comes to the AI ecosystem, and lifecycle will be important tasks for organizations now building their compliance approach and programs,” Joe Jones, director of research for the International Association of Privacy Professionals, told MLex.
Duties for downstream players
The text specifies that these obligations would be limited to the modification that was carried out, giving the example of complementing the technical documentation provided by the original model provider with information on the modifications.
However, technical documentation is perhaps the most straightforward duty mandated by the AI rulebook for GPAI model providers. All duties, including copyright-related ones, could fall on the shoulders of any downstream players who change the model.
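By way of illustration only, the "information on the modifications" a fine-tuner might append to the upstream documentation could look something like the record below; the field names are illustrative assumptions, not the AI Act's actual documentation schema, which is set out in the law's annexes:

```python
# Illustrative only: field names are assumptions, not the act's schema.
modification_record = {
    "base_model": "mistralai/Mistral-7B-v0.1",  # documented upstream
    "modification_type": "fine-tuning",
    "training_data_summary": "10k anonymized customer-support transcripts",
    "compute_used": "8 GPU-hours",
    "intended_purpose": "customer-support assistant for telecom billing",
    "evaluation_results": "see internal evaluation report v1.2",
}
```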
It is also unclear what will happen if a downstream economic operator modifies a model deemed to pose a "systemic risk" under the AI Act, a category subject to a stricter regime of model evaluation and risk mitigation.
In that case, there is no clarity as to whether downstream players would have to repeat the red-teaming exercise or put a risk-mitigation system in place merely for altering the model's parameter weights.
GPAI model providers such as Google and OpenAI will be able to demonstrate their compliance by adhering to recognized codes of practice and technical standards. Downstream operators might do the same, but there is no sign they will be involved in developing these compliance tools.
Enforcement and market trends
It is unlikely the European Commission’s AI Office will have the capacity to monitor the compliance of a potentially massive constellation of model fine-tuners. Instead, the EU body will probably focus on the top model providers, and only investigate the entire value chain if something goes wrong.
Alternatively, the EU body might scrutinize samples of fine-tuned models and rely on the scientific community to flag up potential societal risks.
Market trends will also have a major effect on how this provision is ultimately applied: the most successful AI companies have been closing off their models, which limits the influence of downstream players.
Similarly, it remains to be seen how many of the most powerful models, those the AI Act deems to pose a "systemic risk," will be open to fine-tuning.
A likely scenario is that, in fine-tuning an AI model, the downstream player turns it into an application with a specific intended purpose.
For Barry Scannell, a partner at law firm William Fry, one possible interpretation is that fine-tuning significantly shifts duties onto downstream players only if it causes the intended purpose of the application built on that model to become risky to people's health and fundamental rights, a scenario that triggers a strict regulatory regime.
Policymakers admit that the AI Act was negotiated in a rush. The regime for AI models, in particular, was hastily developed in the final weeks of the negotiations.
That means the AI Office will probably have to review the situation in a few years' time to see where the market is going. Still, other unexpected compliance burdens of this kind might be lying in wait.
An EU official involved in drafting the text told MLex that there are "many ‘Easter eggs’ hidden in the text."