As Artificial Intelligence (AI) technology advances at a rapid pace, global efforts to establish appropriate legislative frameworks are developing just as quickly. Since the beginning of 2023, several countries, including the United Kingdom, have announced legislative actions intended to keep pace with this era of emerging technological development. The UK’s trend, at least until now, has been a relatively conservative approach that leaves AI regulation and enforcement to existing sector-specific authorities.
On August 31, 2023, the House of Commons published a committee report (‘the Report’) raising 12 challenges to current AI governance (including privacy) and recommending ‘a tightly-focussed AI Bill’ as a better regulatory approach to keep pace with AI developments while preserving the UK’s position as a go-to destination for innovation.
Key Takeaways from the Report
Challenges to current AI governance: The Report outlined twelve challenges that the UK’s AI governance must address, adding that the Government’s response to the Report, and to its own white paper, must set out how it intends to address each of them in its roadmap. They are:
- Privacy: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
- Bias: AI can introduce or perpetuate biases that society finds unacceptable.
- Misrepresentation: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
- Access to Data: The most powerful AI needs very large datasets, which are held by few organisations.
- Intellectual Property and Copyright: Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
- Access to Compute: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
- Black Box: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
- Open-Source: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
- Liability: If AI models and tools are used by third parties to do harm, policy must establish whether the developers or providers of the technology bear any liability for harms done.
- Employment: AI will disrupt the jobs that people do and those that are available to be done; policy makers must anticipate and manage the disruption.
- International Coordination: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
- The Existential challenge: Some people think that AI is a major threat to human life; if that is a possibility, governance needs to provide protections for national security.
Call for improvements to the current AI regulatory strategy: As we reported earlier, the UK Government proposed a “pro-innovation approach to AI regulation” in March 2023 in the form of a white paper outlining five principles intended to frame regulatory activity and to guide the future development and use of AI models and technologies. According to the white paper, these principles would be interpreted and put into action by the relevant sectoral regulators, with assistance from central support functions initially provided from within government. While the committee endorses this approach in its Report, it regards it only as a starting point for a more comprehensive statute that addresses the gaps highlighted by the challenges listed above. The Report went on to suggest that the opportunity for the UK to be one of the go-to places in the world for AI development and deployment is time-limited, given the international race to set the standard in this sector, and that a ‘rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives’ is therefore needed.
What to expect
The UK Government is required to respond to the Report within two months of its release, i.e., by October 31, 2023. That would be just in time for the global AI safety summit, touted as the first of its kind, to be held in the United Kingdom in early November. The response should reveal whether the regulatory framework outlined in the AI white paper will make room for the possibility of new legislation and a new regulator. We will continue to monitor these developments and will keep you updated as new information becomes available. Watch this space.