On 14 June 2023, the European Parliament voted to adopt its negotiating position on the Artificial Intelligence (AI) Act. This is the latest development in a series of moves by EU institutions and lawmakers towards a harmonised framework of regulation for AI, following the European Commission's 2021 Proposal for a Regulation on AI. In a press conference following the vote, co-rapporteurs Brando Benifei (S&D, Italy) and Dragos Tudorache (Renew, Romania) explained that the regulation will focus on "the European values of democracy, fundamental rights and the rule of law." As the Act is expected to be the first comprehensive attempt at AI regulation on the global stage, it is widely anticipated to set the tone for AI regulation internationally.

This article sets out some of the key takeaways from the European Parliament’s negotiating position.

Key Points:

  • Defining AI: The European Parliament has voted to bring the definition of AI in line with that used by the OECD, aiming to capture future innovations in the field while excluding traditional computational processes. An AI system shall be defined as:

“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”

  • Risk-based Classification: The risk-based approach is key. Obligations will be imposed on technology producers and deployers based on the risk category into which their technology falls. Technologies that pose "unacceptable" levels of danger will be forbidden, while "high-risk" technologies will face heavy restrictions. Notably, the list of prohibited technologies now includes biometric identification systems, systems that use purposefully manipulative techniques, social scoring systems, predictive policing systems and emotion recognition systems.
  • General Purpose and Generative AI: Those looking to provide and deploy general purpose AI and generative AI will face specific transparency and safety constraints. To limit threats to areas such as health, safety, human rights and democracy, providers of general purpose AI must build in protections at stages such as design and testing. This entails assessing and mitigating risks, as well as registering models in an EU database. Providers of generative AI must adhere to stricter transparency requirements, including requirements to disclose when content has been generated by AI and to publicise training data that is protected by intellectual property rights.
  • Penalties: Notably, the ceiling for penalties related to prohibited practices has been raised to €40 million or 7% of a company's annual global revenue.
  • Promoting Innovation: To promote innovation, exemptions will apply to research activities and to AI components provided under open-source licences. Public authorities will also be enabled to create regulatory sandboxes for testing AI prior to deployment.

What Next?

Final trilogue negotiations between the Commission, the Council and the Parliament to reconcile the proposals made by each body will now commence. This process is being expedited, with the European Commission expecting negotiations to conclude by the end of 2023, ahead of the 2024 European Parliament elections.

Author

Vin leads our London Data Privacy practice and is a member of our Global Privacy & Security Leadership team, bringing over 22 years of experience in this specialist area. He advises clients across a range of data-rich sectors, including retail, financial services/fintech, life sciences, healthcare, proptech and technology platforms.

Author

Eman Al Suood is a Summer Clerk at Baker McKenzie London.