Background

On September 5, 2024, the Council of Europe’s Framework Convention on Artificial Intelligence, the first legally binding international treaty to address AI technologies (the “Convention”), was opened for signature and received its first signatures. The Convention was negotiated by the Council of Europe’s 46 member states together with eleven non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the US and Uruguay). It aims to ensure that AI is consistent with core principles such as human rights, democracy and the rule of law, while promoting innovation in the field.

Scope

The Convention governs the development, deployment and use of an “artificial intelligence system”, which it defines as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments.” This definition aligns with the OECD’s latest definition of AI adopted in November 2023.

The Convention applies primarily to public authorities, as well as private entities acting on their behalf. However, the Convention also directs signatories to “address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors” by applying the Convention’s requirements to private actors or “by taking other appropriate measures”. Systems relating to the protection of national security, research and development activities, and matters of national defense are broadly excepted from the Convention.

Requirements

The Convention establishes an obligation to ensure that AI activities are consistent with human rights obligations enshrined in international and domestic law. Parties are required to adopt measures to prevent AI systems from being used to undermine the integrity of democratic institutions and processes. The Convention also sets out general principles that parties are required to implement with regard to AI, including:

  • Human dignity and individual autonomy
  • Transparency and oversight
  • Accountability and responsibility
  • Equality and non-discrimination
  • Reliability
  • Privacy and personal data protection
  • Safe innovation

Parties must also implement measures to identify, assess, prevent and mitigate AI risks. These measures must take account of the context and intended use of AI systems, as well as the severity and likelihood of their potential impacts, and should consider the perspectives of relevant stakeholders as appropriate. They are to be applied iteratively throughout the AI lifecycle and must include monitoring for, and documentation of, risks and adverse impacts. Parties may also require testing of AI systems before they are made available.

Discussion

While some of its aspects draw upon the recent EU AI Act, the Convention—being a creature of international law—differs from the AI Act in that it affords signatories considerable flexibility. The Convention does not prescribe specific measures but articulates broad goals, leaving countries to decide what laws or mechanisms to put into place to implement the Convention’s objectives.

Despite this lack of specific requirements, the Convention, as the first multilateral treaty addressing AI, represents an important landmark in global AI regulation. While the Convention suggests that there is no one-size-fits-all approach to regulating AI, it also signals a growing interest in global consensus around the broad-stroke policy considerations concerning AI.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Cynthia J. Cole is Chair of Baker McKenzie’s Global Commercial, Tech and Transactions Business Unit, a member of the Firm’s global Commercial, Data, IP and Trade (CDIT) practice group Steering Committee and Co-chair of Baker Women California. A former CEO and General Counsel, Cynthia was, immediately before joining the Firm, Deputy Department Chair of the Corporate Section in the California offices of Baker Botts, where she built the technology transactions and data privacy practice. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy, and cybersecurity strategy.

Author

Avi Toltzis is a Knowledge Lawyer in Baker McKenzie's Chicago office.