The UK government has published its long-awaited response to the AI White Paper, “A pro-innovation approach to AI regulation”, published back in March 2023. Here’s what you need to know.

1. Endorsement of principles-based approach

The UK is sticking with its principles-based regulatory framework for AI, continuing to take a very different path to its EU neighbours (as discussed in our previous alert). Unlike in the EU, there will be no new AI regulator, no new legislation and no new penalties at this time. Indeed, the UK government states that, in response to “widespread support” for its “pro-innovation” approach to regulating AI, it remains committed to “a context-based approach that avoids unnecessary blanket rules that apply to all AI technologies regardless of how they are used.”

As a reminder, the UK government has identified the following five principles that it expects UK regulators to interpret and apply within their remits.

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The government’s view is that a non-statutory approach to AI allows for more flexibility, which means that these principles will not be binding. However, this approach will remain under review, and the UK government is not ruling out targeted binding measures in the future (indeed, the response suggests that future regulation is inevitable): “we will legislate when we are confident it is the right thing to do”.

Of course, this is the current administration’s approach to AI regulation – the UK has a general election coming up later in 2024, and a new administration may reassess this position, with the Labour party (currently ahead in the polls) indicating that it would implement a ‘stronger regulatory framework’ than that proposed by the current government.

2. The regulation of highly capable general-purpose AI systems

It remains the government’s view that for the large majority of AI systems, it will be more effective to focus on how AI is used within a specific context than to regulate specific technologies.

However, the government has recognised the risk of regulatory gaps when it comes to “highly capable general-purpose AI systems” (i.e. foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models). The government’s response indicates that targeted mandatory interventions may be required, but there are no immediate plans to propose such measures. This means that, for the time being, the voluntary measures adopted by industry will remain the only measures focused on foundation models.

That said, to inform the government’s evaluation of how effectively these voluntary measures address AI risks and ensure AI safety, the new AI Safety Institute will lead testing of next-generation AI models in the UK. Importantly, the government has made it clear that the goal of such evaluations will not be to deem a model “safe”, and that the AI Safety Institute is not a regulatory body. However, the government has said that if the AI Safety Institute identifies a potentially dangerous capability through its evaluation of advanced AI systems, the Institute may address risks by engaging developers on suitable safety mitigations and collaborating with the government’s AI risk management and regulatory architecture.

The government will provide an update on its approach to highly capable general-purpose AI systems by the end of 2024 (i.e. after the next election…).

3. Known AI risks and upcoming legal and regulatory activity

The response touches on some known AI risks and related developments.

  • Intellectual property – The government has confirmed that the working group of rightsholders and AI developers set up by the Intellectual Property Office “will not be able to agree an effective voluntary code” on AI and copyright. The government’s response indicates that it will explore “…mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models” and that further proposals on the way forward will be set out “soon”, but what those proposals will look like remains unclear.
  • Data protection – The government notes that the UK’s data protection framework, which is being reformed through the Data Protection and Digital Information Bill (“DPDI”), will complement its current approach to regulating AI. The DPDI aims to expand on and simplify the current rules on automated decision-making, which are “confusing and complex”.
  • Competition – The Digital Markets, Competition and Consumers Bill, which is currently progressing through Parliament, will give the CMA additional tools to identify and address competition issues in AI markets and in other digital markets affected by recent developments in AI.
  • Misinformation – The Online Safety Act 2023 places new responsibilities on online service providers and captures attempts by foreign state actors to manipulate information.
  • Security – The National Cyber Security Centre (“NCSC”) published guidelines for secure AI system development in November 2023. The government has indicated that it will release a call for views in spring 2024 to obtain further input on securing AI models, including a potential Code of Practice for the cyber security of AI based on the NCSC’s guidelines. In addition, the government’s response highlights the security regime in the Product Security and Telecommunications Infrastructure Act (“PSTI Act”), which is scheduled to come into effect in 2024. The PSTI Act will require manufacturers of consumer connectable products (e.g. AI-enabled smart speakers) to comply with minimum security requirements.

4. Guidance for regulators

Alongside the White Paper response, the UK government has provided initial guidance for UK regulators on how to interpret and apply the AI principles, with further updated guidance to be issued by summer 2024. The guidance is not intended to be prescriptive, and how the principles are applied will ultimately be at each regulator’s discretion. Regulators are encouraged to develop their own tools and guidance. The government notes that certain UK regulators have already published AI guidance, for example the Information Commissioner’s Office (“ICO”) and the Competition and Markets Authority (“CMA”). The remaining regulators have been asked to publish an update outlining their strategic approach to AI by 30 April 2024.

The government will continue to evaluate any potential gaps in existing regulatory powers and remits. It will also provide support to regulators through a new £10 million fund for new tools and research projects, as well as through the Digital Regulation Cooperation Forum (DRCF) AI and Digital Hub. (Of course, £10 million across all regulators is hardly generous, and it remains to be seen how it will be allocated.)

5. Centralised AI function within government

Recognising that individual regulators cannot successfully address the opportunities and risks presented by AI in isolation (concerns had been raised about regulatory overlaps, gaps and poor coordination across the various UK regulators), a central function will be established within the UK government to “support effective risk monitoring, regulator coordination, and knowledge exchange”.

The government has started undertaking cross-sectoral risk monitoring and has committed to launching a targeted consultation on a cross-economy AI risk register during 2024. The aim of the register will be to provide a single source of truth on AI risks which regulators, government departments, and external groups can use.

In addition, the government is considering the added value of developing a specific risk management framework for AI, similar to the one developed in the US by the National Institute of Standards and Technology (NIST).

Practical Impact

Organisations developing or deploying AI systems in the UK may be relieved that they won’t have to comply with a prescriptive mandatory regime for AI. However, companies will still need to ensure compliance with existing laws governing AI development and deployment, and keep a close eye out for AI updates from all applicable UK regulators. Despite the government’s best efforts, there is certainly scope for divergence in the approaches taken by different regulators, which may prove challenging.

The ICO, for example, has already initiated enforcement action and investigations where the deployment of AI involves the processing of personal data (with particular interest in some areas, for example biometrics and children’s data). So there is a notable ‘beware’ sign here: whilst the general approach in the UK will be one of principles rather than prescriptive AI legislation, this should not be taken to mean that AI won’t attract scrutiny or enforcement risk.

Also, although the UK has intentionally taken a very different approach from the EU AI Act, if you are a UK-based organisation with cross-border operations in the EU, you will still need to assess the potential impact of the EU AI Act on your business. For the latest on its progress, please check out our latest EU AI Act update.

And global businesses will need to build a Responsible AI governance framework that takes into account an increasingly complex patchwork of local AI regulation across jurisdictions and can be flexed to take account of new developments.

It’s only February and we have already had EU and UK updates on AI regulation. Who’s next? To keep up-to-date on all AI news, follow our Connect on Tech blog.

Author

Sue is a Partner in our Technology practice in London. Sue specialises in major technology deals including cloud, outsourcing, digital transformation, and development and licensing. She also advises on a range of legal and regulatory issues relating to the development and roll-out of new technologies, including AI, blockchain/DLT, metaverse and crypto-assets.

Author

Karen Battersby is Director of Knowledge for Industries and Clients and works in Baker McKenzie's London office.

Author

Vin leads our London Data Privacy practice and is also a member of our Global Privacy & Security Leadership team, bringing over 22 years of experience in this specialist area and advising clients from various data-rich sectors including retail, financial services/fintech, life sciences, healthcare, proptech and technology platforms.

Author

Jessica is an Associate in the Baker McKenzie London team. Jessica's practice covers a range of commercial, technology and communications matters. She has advised on complex telecommunications and regulatory issues, and has also been involved in a number of outsourcing negotiations.

Author

Kathy Harford is the Lead Knowledge Lawyer for Baker McKenzie’s global IP, Data & Technology practice.