Last week was exceptionally busy for AI governance developments, on both sides of the Atlantic and globally.

While countries are racing to demonstrate their leadership on AI governance on the global stage, common themes continue to emerge. Policymakers recognise the need to balance the opportunities and potential this technology can bring against the risks and challenges it poses. There is an increasing acknowledgement that developing safe, secure and trustworthy AI with humankind at the centre can only be achieved through global cooperation.

In case you are struggling to keep up, here is a summary of some of the key AI developments from last week:

  1. G7 International Guiding Principles on AI and AI Code of Conduct (30 October);
  2. US Executive Order on Safe, Secure and Trustworthy AI (30 October);
  3. UK AI Safety Summit and the Bletchley Declaration (1-2 November);
  4. US AI Safety Institute (1 November);
  5. UK AI Safety Institute (2 November);
  6. European AI Office (2 November).

***

The leaders of the G7 reached an agreement on International Guiding Principles on AI and a voluntary Code of Conduct for AI developers. (The G7 comprises Canada, France, Germany, Italy, Japan, the UK and the USA, with the EU also participating.)

The European Commission welcomed the Principles and the Code of Conduct, indicating that they are consistent with, and will complement, the proposed EU AI Act currently in trilogue negotiations.

The Guiding Principles and the Code of Conduct are designed as living documents that will evolve over time. They build on the OECD AI Principles adopted in 2019 to promote AI that is innovative, trustworthy and respectful of human rights and democratic values, and they provide guidance for organisations developing, deploying and using advanced AI systems.

The non-exhaustive Guiding Principles apply to all AI actors in the design, development, deployment and use of advanced AI systems, and include the following:

  • Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
  • Monitor, identify, document and mitigate identified risks and vulnerabilities in AI systems in collaboration with other stakeholders.
  • Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increased accountability.
  • Work towards responsible information sharing and reporting of AI related incidents.
  • Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach.
  • Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
  • Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
  • Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
  • Prioritize the development of advanced AI systems to address the world’s greatest challenges.
  • Advance the development of and, where appropriate, adoption of international technical standards.
  • Implement appropriate data input measures and protections for personal data and intellectual property.

The Code of Conduct contains a non-exhaustive list of voluntary guidance for actions by organizations developing the most advanced AI systems. These actions should be implemented in line with a risk-based approach at all stages of the lifecycle, including the design, development, deployment and use of advanced AI systems. It covers the following actions:

  • Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
  • Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment, including placement on the market.
  • Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increased accountability.
  • Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.
  • Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures.
  • Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
  • Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content (a simple illustration of such a mechanism follows this list).
  • Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
  • Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.
  • Advance the development of and, where appropriate, adoption of international technical standards.
  • Implement appropriate data input measures and protections for personal data and intellectual property.
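
The Guiding Principles and the Code of Conduct describe content authentication and provenance at the level of outcomes rather than implementation. Purely by way of illustration, the sketch below shows one very simple way a provider could bind AI-generated content to a signed provenance manifest so that users can verify its origin. It is not drawn from the G7 documents: real deployments would typically rely on open standards such as C2PA and asymmetric signatures, and the signing key, model name and manifest fields here are hypothetical.

```python
# Illustrative only: bind AI-generated content to a signed provenance
# manifest so that downstream users can verify where it came from.
# The signing key, model name and manifest fields are hypothetical;
# production systems would normally use an open standard (e.g. C2PA)
# and asymmetric signatures rather than a shared secret.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-held-secret"  # hypothetical key held by the AI provider


def attach_provenance(content: bytes, model_id: str) -> dict:
    """Create a manifest binding the content hash to the generating model."""
    manifest = {
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the content and carries a valid signature."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    output = b"AI-generated article text"
    manifest = attach_provenance(output, model_id="example-frontier-model")
    print(verify_provenance(output, manifest))               # True
    print(verify_provenance(b"tampered content", manifest))  # False
```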

President Biden issued a 63-page Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, which seeks to define the trajectory of artificial intelligence adoption, governance and usage within the United States government. The Executive Order outlines eight guiding principles and priorities for US federal agencies to adhere to as they adopt, govern and use AI. Our US AI team explain more in this recent alert.

The Bletchley Declaration was published on the opening day of the AI Safety Summit hosted at Bletchley Park in the UK. Signed by 28 countries (Australia, Brazil, Canada, Chile, China, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Saudi Arabia, Netherlands, Nigeria, the Philippines, Republic of Korea, Rwanda, Singapore, Spain, Switzerland, Türkiye, Ukraine, UAE, UK and USA) and the EU, the Declaration marks the first time leading nations have made a public commitment to cooperate on AI safety.

Signatories to the Bletchley Declaration have agreed that risks arising from AI are inherently international in nature, and will be best addressed through international cooperation. Signatories have resolved to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI.

Note that the AI Safety Summit and the Bletchley Declaration focused solely on ‘Frontier AI’ – a subset of AI comprising highly advanced general-purpose AI models, including foundation models, with capabilities on a par with or surpassing those of the most sophisticated contemporary systems (i.e. a narrower scope than that of the EU AI Act). The organisers considered that frontier AI systems pose significant safety risks, particularly in domains such as cybersecurity and biotechnology, with concerns arising from the potential for misuse, loss of control, and the amplification of risks such as disinformation.

The Declaration is not legally binding and is more symbolic than a detailed roadmap. However, signatories agreed to:

  • identify AI safety risks of shared concern, build a shared scientific and evidence-based understanding of these risks, and sustain that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies; and
  • build respective risk-based policies across their countries to ensure safety in light of such risks, collaborating as appropriate while recognising approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

Demonstrating that the Summit was simply the first step towards closer alignment, the Republic of Korea has agreed to co-host a mini virtual summit on AI within the next six months. France will then host the next in-person Summit in November 2024.

Vice President Kamala Harris announced a series of U.S. initiatives to advance the safe and responsible use of AI, including:

  1. Establishing the United States AI Safety Institute (US AISI) within NIST, which will develop technical guidance and enable information-sharing and research collaboration with peer institutions internationally, including the UK’s AI Safety Institute;
  2. Releasing draft policy guidance on the use of AI by the U.S. government;
  3. Announcing that 31 nations have joined the United States in endorsing the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, and calling on others to join; and
  4. Announcing a New Funders Initiative to advance AI in the public interest.

The UK announced that its Frontier AI Taskforce will evolve into the AI Safety Institute (UK AISI), a new global hub based in the UK tasked with testing the safety of emerging types of AI. The Institute will carefully test new types of frontier AI before and after release to address the potentially harmful capabilities of AI models, exploring the full range of risks, from social harms such as bias and misinformation to the most unlikely but extreme risks, such as humanity losing control of AI completely.

In her speech at the AI Safety Summit, President von der Leyen identified four pillars on which a framework for understanding and mitigating the risks of very complex AI systems should be built.

  • First pillar: an independent scientific community equipped with the means to evaluate AI systems, including public funding and access to the best supercomputers.
  • Second pillar: internationally accepted procedures and standards for testing AI safety.
  • Third pillar: a standard reporting procedure for significant incidents caused by errors or misuse of AI.
  • Fourth pillar: an international system of alerts fed by trusted flaggers.

According to President von der Leyen, these pillars should be accompanied by solid, traceable corporate responsibility on the part of organizations, together with binding principled rules and oversight by public authorities. In that respect, President von der Leyen announced a proposed European AI Office that would deal with the most advanced AI models, work with the scientific community at large, have oversight responsibility, and enforce the common rules across all 27 EU Member States.

***

Responsible AI in action

Many businesses are finding it difficult to navigate the increasingly crowded international regulatory landscape on AI. Last week's developments signal closer international alignment around AI governance and standards, which is welcome, but there is likely to be a long road ahead. In the meantime, businesses will continue to focus on AI as a strategic imperative. Accordingly, while policymakers continue to contemplate whether there should be new rules on AI and what they should be, organisations can’t stand still. Companies will need to continue to drive forward their own responsible AI governance in order to develop and deploy AI safely, including:

  • auditing their development and use of AI within the organisation and their supply chain (a simple sketch of what such an inventory might record follows this list);
  • deciding what their AI principles and redlines should be (likely to include ethical considerations that go beyond the law);
  • assessing and augmenting existing risks and controls for AI where required (including to meet applicable EU AI Act requirements for any AI systems destined for the EU market), both at an enterprise and product lifecycle level;
  • identifying relevant AI risk owners and internal governance team(s);
  • revisiting existing vendor due diligence processes related to both (i) AI procurement and (ii) the procurement of third party services, products and deliverables which may be created using AI (in particular, generative AI systems);
  • assessing existing contract templates and any updates required to mitigate AI risk; and
  • continuing to monitor AI and AI adjacent laws, guidance and standards to ensure that the company’s AI governance framework is updated in response to further global developments as they arise.
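
By way of illustration only, the sketch below shows the kind of structured record an internal AI audit (the first bullet above) might produce for each system, with a risk tier loosely inspired by the EU AI Act's risk-based approach. The field names, risk tiers and example entry are hypothetical rather than prescribed by any law or standard.

```python
# Illustrative only: one way to record the output of an internal AI audit as a
# structured register entry. The field names, risk tiers and example system are
# hypothetical and only loosely inspired by the EU AI Act's risk-based approach;
# they are not taken from any statute or standard.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    name: str
    owner: str                     # internal risk owner (see the bullets above)
    purpose: str
    vendor: str | None             # None if developed in-house
    uses_personal_data: bool
    destined_for_eu_market: bool
    risk_tier: RiskTier
    mitigations: list[str] = field(default_factory=list)


# Hypothetical register entry produced by an audit of an HR screening tool.
register = [
    AISystemRecord(
        name="cv-screening-assistant",
        owner="HR Operations",
        purpose="Shortlisting job applications",
        vendor="Acme AI Ltd",
        uses_personal_data=True,
        destined_for_eu_market=True,
        risk_tier=RiskTier.HIGH,  # employment use cases are treated as high-risk in the proposed EU AI Act
        mitigations=["human review of rejections", "pre-release bias testing"],
    ),
]

for record in register:
    print(record.name, record.risk_tier.value)
```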

***

Baker McKenzie’s recognized leaders in AI are supporting multinational companies with strategic guidance for responsible and compliant AI development and deployment. Our industry experts with experience in technology, data privacy, intellectual property, cybersecurity, trade compliance and employment can meet you at any stage of your Responsible AI journey to unpack the latest trends in legislative and regulatory proposals and the corresponding legal risks and considerations for your organization. Please contact a member of our team to learn more.

Author

Elisabeth is a partner in Baker McKenzie's Brussels office. She advises clients in all fields of IT, IP and new technology law, with a special focus on data protection and privacy aspects. She regularly works with companies in the healthcare, finance and transport and logistics sectors.

Author

Sue is a Partner in our Technology practice in London. She specialises in major technology deals including cloud, outsourcing, digital transformation, and development and licensing. She also advises on a range of legal and regulatory issues relating to the development and roll-out of new technologies including AI, blockchain/DLT, metaverse and crypto-assets.

Author

Samir is an English qualified Solicitor of the Senior Courts of England and Wales and a registered Legal Adviser with the Dubai Government Legal Affairs Department. He is a counsel in the Firm’s Financial Regulatory and Investigations, Compliance & Ethics (IC&E) practices, based in Dubai, and is the FinTech and AI lead for the Middle East and North Africa (MENA), with ten years’ experience in the region.

Author

Vin leads our London Data Privacy practice and is a member of our Global Privacy & Security Leadership team. He brings over 22 years’ experience in this specialist area, advising clients from various data-rich sectors including retail, financial services/fintech, life sciences, healthcare, proptech and technology platforms.