AI is all the rage right now, especially generative AI, which creates new content (text, images, audio and video) in response to user prompts. Some contend that generative AI's prompt-based model has sparked a technological revolution, so it is unsurprising that a slew of companies want to be part of it. However, generative AI tools are only as good as the data they are fed: their models are trained on repositories of existing data in order to generate new and original content. As a result, the capacity of generative AI tools to analyse huge volumes of data and produce highly personalised outputs raises substantial challenges for securing personal and sensitive information, and this has prompted increased regulatory oversight by the ICO.

What the ICO has to say:

Whilst generative AI undoubtedly presents a wealth of opportunities, the Information Commissioner’s Office (ICO), like many others, is cognisant of the privacy risks associated with these models and warns that businesses should ensure the necessary privacy measures are in place before using such tools. These risks mirror those at the forefront of data protection concerns in recent years, such as the collection and processing of personally identifiable information. To address these concerns, on Thursday 15 June 2023 the ICO called for ‘businesses to address the privacy risks generative AI can bring before rushing to adopt the technology’. Such risks are not new from a data protection perspective: policymakers have grappled with data protection regulation for years. The regulatory landscape emerging from the ICO’s announcement nevertheless sends a refreshed and clear message: the UK is taking a controlled stance on the regulatory frameworks and other issues surrounding AI.

Under this controlled stance, the ICO will focus on organisations that are not following the law and has set out steps to ensure organisations using generative AI do so compliantly. These steps are not out of the ordinary; they amount to a familiar checklist: determining the appropriate lawful basis for processing, clarifying controller and processor roles so that obligations are properly apportioned, conducting data protection impact assessments (DPIAs), mitigating security risks, ensuring all the data protection principles are adhered to, and taking note of the additional obligations that apply where the AI is used to make solely automated decisions.

Further, the ICO expects organisations to adapt such data protection measures to fit the needs of their business, and to do so before implementing any generative AI tools. Organisations should therefore carry out a full risk assessment, addressing business-specific risks as well as adhering to general regulatory requirements. In particular, where organisations conduct automated processing of data through generative AI, they must consider the applicable data protection rules. The ICO is clearly placing the onus on organisations to ensure their privacy measures are targeted and proportionate. The context-dependent approach the ICO is adopting also underlines the scrutiny organisations will face in this regulatory landscape; a blanket approach to privacy risks will not suffice.

The ICO’s position in this field remains consistent with the UK’s pro-innovation stance set out in the white paper published earlier this year, which defers to existing regulators to promote and oversee responsible AI activities rather than reinventing the wheel with a new AI law, as the EU is currently doing. While this stance is encouraging for companies looking to use generative AI effectively, it is clear that the ICO expects them to do so responsibly and with privacy protection as a top priority.

Author

Vin leads our London Data Privacy practice and is a member of our Global Privacy & Security Leadership team. He brings over 22 years’ experience in this specialist area, advising clients across data-rich sectors including retail, financial services/fintech, life sciences, healthcare, proptech and technology platforms.

Author

Nicola Russell is a Trainee Solicitor at Baker McKenzie London.