The EU AI Act was adopted by the European Parliament today and is expected to enter into force within a few months, with its first substantive provisions taking effect before the end of 2024.

The EU AI Act applies across the AI lifecycle – from developers to deployers of AI technologies – and organisations across industries have been watching its progress closely. Now that it is finally approved, we set out below what’s next, and the key practical steps organisations should be taking now to incorporate EU AI Act compliance into their AI strategy and governance. The Act applies not only to organisations within the EU but also to those based outside it that deploy AI in any EU Member State, so many multinational companies will be caught by its provisions.

The Act takes a risk-based approach, categorising AI systems according to their use. Unacceptable-risk uses, such as facial recognition in public places, are banned outright. High-risk uses, such as AI in employment decisions, are subject to conformity assessments and risk management. Limited-risk uses, such as chatbots, carry information requirements, while minimal-risk uses are addressed through voluntary codes of conduct. General purpose AI models, ie the most advanced AI systems being produced by the big tech companies, are also regulated by the Act.

What’s next?

The final text will be prepared for official publication, and will enter into force 20 days afterwards. Obligations under the Act will come into force in stages, reflecting the risk-based categorisation of AI systems:

  • 6 months after entry into force – likely late 2024 – the ban on prohibited systems will take effect. This covers a limited set of use cases deemed to pose an unacceptable risk to fundamental rights, which are prohibited and must be phased out. Prohibited use cases include the use of subliminal techniques, systems that exploit a person’s vulnerabilities, biometric categorisation, social scoring, individual predictive policing, certain uses of facial or emotion recognition, and ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except in certain limited circumstances.
  • 12 months after entry into force – likely mid-2025 – obligations for general purpose AI (GPAI) governance become applicable. GPAI systems do not need to go through a pre-market conformity assessment, and the obligations imposed on them are generally less onerous than for high-risk systems. However, they are subject to requirements in relation to technical documentation, policies to comply with copyright law, making available a “sufficiently detailed” summary of the content of the training dataset, and labelling AI-generated or manipulated content. GPAI systems deemed to present “systemic risk” are subject to additional requirements.
  • 24 months after entry into force – likely late 2026 – the AI Act becomes generally applicable. This includes the full weight of obligations applicable to most high-risk AI systems (except those discussed below), including pre-market conformity assessment, quality and risk management systems, and post-market monitoring. The Act contains a detailed list of defined high-risk use cases in Annex III, which can be amended to reflect future developments and will be supplemented with practical guidance and examples from the EU Commission.
  • 36 months after entry into force – likely late 2027 – the Act applies to AI systems that are a product, or are intended to be used as a safety component of a product, which is already required to undergo a third-party conformity assessment under EU product regulation. Examples of products in this category range from medical devices to toys, and the full list of relevant EU product regulation is set out in Annex I of the Act. Existing sector-specific regulators will retain responsibility for enforcing the Act against these products, which have an extra year for compliance.
  • AI systems already on the market have an additional period for compliance. High-risk AI systems already on the market will only be regulated by the Act if they are subject to significant changes to their design. GPAI systems already on the market will have an additional two years to comply.

In parallel, the EU Commission and member states will need to put in place the framework for enforcement of the Act. That includes:

  • Identification, creation and staffing of relevant supervisory authorities. The EU AI Office has been created to support the governance bodies in EU Member States and enforce the rules for GPAI models. Member States will be required to designate national supervisory authorities to enforce the ban on prohibited systems and the obligations applicable to high-risk AI systems. For a defined list of areas regulated by existing EU product safety legislation, from aviation to medical devices, responsibility for enforcement will rest with existing sector-specific regulators, who will need to ensure they have the appropriate technical expertise to take on this new regulatory burden.
  • Laying down penalties. The AI Act gives Member States the power to set penalties and other enforcement measures, subject to maximum caps which, depending on the obligation breached, reach EUR 35m or 7% of total worldwide annual turnover, whichever is higher. These will need to be determined shortly after entry into force of the Act.
  • Providing guidance on compliance with the Act. The Act requires the European Commission to develop guidelines on its practical implementation within 18 months of its entry into force. Member State supervisory authorities will also need to develop guidance on regulatory expectations under the Act.

What do organisations need to do now?

If you haven’t already conducted a risk assessment to identify the impact of the EU AI Act on your business, now is the time to get started. Assess your AI systems to determine whether they will be subject to the EU AI Act once it enters into force and becomes applicable, and into which risk category they will fall.

Of course, compliance with the EU AI Act will be only one part of your Responsible AI governance programme. The EU AI Act may be heralded by the EU as the first comprehensive AI law, but many AI-related measures are being introduced by lawmakers across the world and, of course, regulators are already scrutinising organisations’ compliance with existing laws when it comes to AI (including with respect to data privacy, consumer protection and discrimination).

Accordingly, we recommend that you:

  • audit your development and use of AI within the organisation and your supply chain;
  • decide what your AI principles and red lines should be (likely to include ethical considerations that go beyond legal requirements, including the parameters set by the EU AI Act);
  • assess and augment existing risks and controls for AI where required (including to meet applicable EU AI Act requirements), both at an enterprise and product lifecycle level;
  • identify relevant AI risk owners and internal governance team(s);
  • revisit your existing vendor due diligence processes related to both (i) AI procurement and (ii) the procurement of third-party services, products and deliverables which may be created using AI (in particular, generative AI systems);
  • assess your existing contract templates and any updates required to mitigate AI risk; and
  • continue to monitor AI and AI-adjacent laws, guidance and standards around the world to ensure that your AI governance framework is updated in response to further global developments as they arise.

Baker McKenzie has a team of dedicated experts who can help you with all aspects of EU AI Act compliance, Responsible AI governance and related policies and processes.

Authors

Kathy Harford is the Lead Knowledge Lawyer for Baker McKenzie’s global IP, Data & Technology practice.


Karen Battersby is Director of Knowledge for Industries and Clients and works in Baker McKenzie's London office.


Elisabeth is a partner in Baker McKenzie's Brussels office. She advises clients in all fields of IT, IP and new technology law, with a special focus on data protection and privacy aspects. She regularly works with companies in the healthcare, finance and transport and logistics sectors.


Dr. Lukas Feiler, SSCP, CIPP/E, has more than eight years of experience in IP/IT and is a partner and head of the IP and IT team at Baker McKenzie • Diwok Hermann Petsche Rechtsanwälte LLP & Co KG in Vienna. He is a lecturer for data protection law at the University of Vienna Law School and for IT compliance at the University of Applied Sciences Wiener Neustadt.


José María Méndez is the partner in charge of the Intellectual Property and Information and Communications Technology practice at Baker & McKenzie Madrid. He was previously a partner in the Intellectual Property practice of an international firm, as well as deputy general secretary of Sogecable and head of the legal department for its film and television business. He frequently takes part in non-profit activities with organisations such as Cáritas Diocesanas and Aldeas Infantiles, and teaches on the Master’s in Intellectual Property at Universidad Carlos III.


Vin leads our London Data Privacy practice and is a member of our Global Privacy & Security Leadership team. He brings over 22 years of experience in this specialist area, advising clients from various data-rich sectors including retail, financial services/fintech, life sciences, healthcare, proptech and technology platforms.


Francesca Gaudino is the Head of Baker McKenzie’s Information Technology & Communications Group in Milan. She focuses on data protection and security, advising particularly on legal issues that arise in the use of cutting edge technology.


Florian Tannen is a partner in the Munich office of Baker McKenzie. He advises on all areas of contentious and non-contentious information technology law, including internet, computer/software and data privacy law.


Prof. Dr. Michael Schmidl is co-head of the German Information Technology Group and is based in Baker McKenzie's Munich office. He is an honorary professor at the University of Augsburg and a specialist lawyer for information technology law (Fachanwalt für IT-Recht). He advises in all areas of contentious and non-contentious information technology law, including internet, computer/software, data privacy and media law. Michael also has a general commercial law background and extensive experience in the drafting and negotiation of outsourcing contracts and in carrying out compliance projects.


Ben works with clients on matters involving the cross-over space of media, IP and technology. His practice has a particular focus on artificial intelligence, data protection, copyright and technology disputes, and he has specific expertise in intermediary liability issues.


Kate handles all aspects of cross-border tax structuring for UK-headed and non-UK headed groups. She is particularly knowledgeable in UK DPT provisions, CFC regime, loan relationship anti-avoidance provisions and the new interest limitation rules. She regularly assists multinational companies with the design of their global holding and financing structures.


Cristina Duch is a partner in the Intellectual Property Practice Group in Barcelona and leader of the EMEA Brand Enforcement & Disputes Practice Group of the Firm. She has significant experience in a wide range of intellectual property matters, with particular emphasis on trademarks, designs, unfair competition and advertisement. She is a member of the Spanish Institute of Chartered Industrial Property Agents.


Sue is a partner in our Technology practice in London. She specialises in major technology deals, including cloud, outsourcing, digital transformation, and development and licensing. She also advises on a range of legal and regulatory issues relating to the development and roll-out of new technologies including AI, blockchain/DLT, metaverse and crypto-assets.