EU AI Act: Top 10 Changes in the Latest Draft

At the end of last week, two European Parliament committees published the latest version of the EU AI Act. The new draft reflects months of political wrangling, but it also shows that EU legislators have listened to the (many) criticisms levelled at the EU AI Act to date. So what's new?

We’ve set out our top ten changes below:

  1. Higher penalties: If you thought the penalties in the previous proposal were eye-watering, they’re about to get steeper. The most serious breaches will now attract fines of up to EUR 40 million or 7% of global annual turnover (up from EUR 30 million or 6% of global annual turnover).
  1. New (and better) definition of AI: The previous definition of an AI system was strongly criticised for capturing too broad a suite of software applications. EU legislators have now aligned with the much tighter OECD definition, i.e. “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.” This means there’s a stronger focus on machine learning and deep learning networks.
  1. Finally, there are rights and recourse for EU consumers(!): One of the most powerful criticisms of the previous version of the AI Act was that, in reality, it had a huge blind spot – it set out no rights, no recourse and effectively no role for individuals. EU legislators have paid attention and placed a new focus on these “affected persons”. They now have the right to lodge complaints with supervisory authorities and a right to an explanation of decision-making from deployers of high-risk systems, and there’s potential for representative actions. About time too!
  1. Foundation models are now in-scope (including generative AI): EU legislators have spent months wringing their hands about how to regulate foundation models, i.e. AI models that are trained on broad data at scale, are designed for generality of output, and may be adapted to a wide range of tasks. They’ve finally settled on an approach – these won’t be regulated as high-risk AI systems, but they are subject to stricter requirements around data governance, technical documentation and quality management systems. For generative AI (e.g. ChatGPT and DALL-E), you’ll need to notify individuals that they are interacting with an AI system, and make publicly available a summary of the training data that is protected under copyright law.
  1. Changes to the list of high-risk AI systems: All categories of high-risk systems listed in Annex III have been amended, including those covering HR, education, law enforcement and biometric systems. Legislators have also added AI recommender systems used by social media platforms designated as very large online platforms under the Digital Services Act (EU) 2022/2065. Importantly, a system listed in Annex III will only be considered high-risk if it clears a new hurdle: it must pose a “significant risk to the health, safety or fundamental rights of natural persons”.
  1. Changes to the list of prohibited systems: Prohibited AI systems now include AI systems that create or expand facial recognition databases through untargeted scraping, and emotion recognition systems in the areas of law enforcement, border management, the workplace and education institutions. The inclusion of emotion recognition systems is particularly welcome – they’re widely acknowledged to be discriminatory (facial expressions and their perceived meaning can be deeply cultural and context-specific) and to lack scientific basis. The use of ‘real-time’ remote biometric identification systems is now banned in all publicly accessible spaces (the previous draft’s wide exceptions have been removed).
  1. A new focus on the environment and climate change: A disturbing fact – according to 2019 research, training a single AI model may emit the equivalent of more than 284 tonnes of carbon dioxide (nearly five times the lifetime emissions of the average US car, including its manufacture). The latest draft takes these concerns seriously, with a new focus on reducing energy consumption and increasing energy efficiency (tied to various record-keeping and documentation requirements).
  1. More obligations for deployers (and everyone else in the supply chain too): Deployers are the parties that deploy an AI system under their own authority. They now face various enhanced compliance obligations, including a requirement to conduct a fundamental rights impact assessment (comparable to a data protection impact assessment) and to provide certain information to individuals subject to a decision made by a high-risk AI system. Importers, distributors, authorised representatives and providers also face enhanced compliance obligations. In the previous draft, “deployers” were (confusingly) called “users” (many assumed the concept of “users” was intended to capture “affected persons”, which it is not – so this is another welcome development).
  1. A shorter transition period: Once the AI Act passes, there will be a grace period before it applies. The latest draft proposes a period of two years, whereas the previous draft set out three. There was already serious concern that three years would not be enough time to build up the infrastructure necessary for compliance across stakeholders (regulators, notified bodies and industry); two years may be a stretch too far. The proposal is likely to go down like a lead balloon – expect this to be a hot topic in negotiations.
  1. General principles for AI: There’s a new set of six general principles applicable to all AI systems: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; and (6) social and environmental well-being. These should be a first port of call for organisations designing ethical AI principles as a governance tool, and all operators are expected to make their best efforts to comply with these.

So what’s next?

The European Parliament is scheduled to vote on the draft in June, and following this, the trilogue will finally get underway – the three-way negotiation between the European Parliament, the Commission and the Council. If this all goes smoothly (a big ‘if’), the AI Act could be formally adopted as early as the beginning of 2024, at which point the transition period will begin.

Are you building your AI Act compliance programme? Then get in touch to find out how we can help.

Author

Jaspreet is a Senior Associate who advises clients on complex issues at the intersection of healthcare, data and technology. Her practice has a particular focus on accessing and using patient data, innovative collaborations with hospitals, and the use and regulation of AI in the healthcare space.