Background
On September 5, 2024, the Council of Europe’s Framework Convention on Artificial Intelligence (the “Convention”), the first legally binding international treaty to address AI technologies, was opened for signature. The Convention was negotiated by the Council of Europe’s 46 member states together with eleven non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the US and Uruguay). It aims to ensure that AI is consistent with core principles such as human rights, democracy and the rule of law, while promoting innovation in the field.
Scope
The Convention governs the development, deployment and use of an “artificial intelligence system”, which it defines as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments.” This definition aligns with the OECD’s latest definition of AI adopted in November 2023.
The Convention applies primarily to public authorities, as well as to private entities acting on their behalf. However, the Convention also directs signatories to “address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors”, either by applying the Convention’s requirements to private actors or “by taking other appropriate measures”. Systems relating to the protection of national security, research and development activities, and matters of national defense are largely exempted from the Convention.
Requirements
The Convention establishes an obligation to ensure that AI activities are consistent with human rights obligations enshrined in international and domestic law. Parties are required to adopt measures to prevent AI systems from being used to undermine the integrity of democratic institutions and processes. The Convention also outlines general principles that parties are required to implement with respect to AI, including:
- Human dignity and individual autonomy
- Transparency and oversight
- Accountability and responsibility
- Equality and non-discrimination
- Reliability
- Privacy and personal data protection
- Safe innovation
Parties must also implement measures to identify, assess, prevent and mitigate AI risks. These measures must take account of the context and intended use of AI systems, as well as the severity and likelihood of their potential impacts; consider the perspectives of stakeholders as appropriate; apply iteratively throughout the AI lifecycle; and include monitoring for, and documentation of, risks and adverse impacts. They may also require testing of AI systems before they are made available.
Discussion
While some of its aspects draw upon the recent EU AI Act, the Convention—being a creature of international law—differs from the AI Act in that it affords signatories considerable flexibility. The Convention does not prescribe specific measures but articulates broad goals, leaving countries to decide what laws or mechanisms to put into place to implement the Convention’s objectives.
Despite this lack of specific requirements, the Convention, as the first multilateral treaty addressing AI, represents an important landmark in global AI regulation. While the Convention suggests that there is no one-size-fits-all approach to regulating AI, it also signals an emerging interest in global consensus around the broad policy considerations that AI raises.