On February 19, 2020, the ICO published its draft guidance on the AI auditing framework for public consultation; the consultation is open until April 1, 2020. We have summarised the key themes below.

What is the draft guidance?

  • The draft guidance, which runs to over 100 pages, provides advice and recommendations on how to understand data protection law in relation to artificial intelligence. It clarifies how to assess the data protection risks posed by AI and identifies technical and organisational measures that can be put in place to help mitigate these risks.
  • The draft guidance is not meant to impose additional legal obligations that go beyond the General Data Protection Regulation; rather, it provides guidance and practical examples on how organisations can apply data protection principles in the context of AI. It also sets out the auditing tools that the ICO will use in its own audits and investigations involving AI.
  • The ICO has identified AI as one of its top three strategic priorities, and has previously issued guidance on AI via its Big Data, AI, and Machine Learning report and the ExplAIn guidance produced in collaboration with the Alan Turing Institute. The new draft guidance has a broad focus on the management of several different risks arising from AI systems, and is intended to complement these existing ICO resources.
  • The draft guidance applies broadly and will be of interest both to organisations that design, build and deploy their own AI systems and to those that use AI systems developed by third parties.

Key Themes

The draft guidance focuses on four key areas:

1. Accountability and governance – The draft guidance highlights that the accountability principle makes companies responsible for the compliance of their AI systems with data protection requirements. They must assess and mitigate the risks posed by such systems, document and demonstrate how the systems comply, and justify the choices they have made. The draft guidance places a strong emphasis on data protection impact assessments (DPIAs): the ICO notes that organisations are under a legal obligation to complete a DPIA if they use AI systems to process personal data, and that this should not be viewed as a mere “box ticking” exercise.

2. Fair, lawful and transparent processing – The draft guidance sets out specific recommendations on how the lawfulness, fairness and transparency principles apply to AI systems, with practical examples of controls that can be implemented to address these principles. For instance, it suggests specific methods for addressing bias and discrimination in AI models, such as using balanced training data (e.g. by adding data on underrepresented subsets of the population). It also highlights that a system’s performance should be monitored on an ongoing basis, with policies setting variance limits for accuracy and bias above which the system should not be used (a simple illustration of these controls appears in the first sketch after this list).

3. Data minimisation and security – The draft guidance highlights that using AI to process personal data can exacerbate known security risks, and includes specific recommendations to address these increased risks. It also stresses that particular care must be taken to comply with the data minimisation principle, given the large data sets required to train AI, and recommends a number of techniques to ensure that AI models only process personal data that is adequate, relevant and limited to what is necessary, for example by removing features from a training data set that are not relevant to the purpose of the model (see the second sketch after this list).

4. The exercise of individual rights – The draft guidance addresses the specific challenges that AI systems pose to giving individuals effective mechanisms for exercising their personal data rights, with practical examples and guidance on how these rights apply in the context of AI. For example, it confirms that requests for access, rectification or erasure of training data should not be considered unfounded or excessive simply because they may be more difficult to fulfil (for example, within a large training data set). However, the ICO clarifies that there is no obligation to collect or maintain additional personal data solely to enable the identification of individuals within training data for the purpose of complying with these requests, so there may be occasions when it is not possible to fulfil a request.
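
To make theme 2 concrete, below is a minimal Python sketch of the kind of rebalancing and monitoring controls the draft guidance describes. It is illustrative only: the data, the group labels and the MAX_ACCURACY_GAP policy threshold are hypothetical, and real controls would be built around an organisation's own models and fairness metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: features X, labels y, and a protected
# attribute `group` (0 = majority, 1 = underrepresented subset).
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)
group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])

# Balance the training data by oversampling the underrepresented group,
# one of the methods the draft guidance mentions.
minority = np.where(group == 1)[0]
majority = np.where(group == 0)[0]
extra = rng.choice(minority, size=len(majority) - len(minority))
balanced = np.concatenate([majority, minority, extra])
X_train, y_train = X[balanced], y[balanced]

# Ongoing monitoring: per-group accuracy must stay within a policy limit.
MAX_ACCURACY_GAP = 0.05               # hypothetical variance limit set by policy
preds = rng.integers(0, 2, size=1000)  # stand-in for real model predictions
acc = {g: np.mean(preds[group == g] == y[group == g]) for g in (0, 1)}
gap = abs(acc[0] - acc[1])
if gap > MAX_ACCURACY_GAP:
    print(f"Accuracy gap {gap:.3f} exceeds the policy limit; "
          "the system should not be used until reviewed.")
```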
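
Similarly, for theme 3, a minimal sketch of feature-level data minimisation might look like the following. The column names and the list of relevant features are hypothetical; the point is simply that personal data not needed for the model's purpose is dropped before training.

```python
import pandas as pd

# Hypothetical training data containing more personal data than the
# model needs for its stated purpose.
training_data = pd.DataFrame({
    "age": [34, 45, 29],
    "purchase_history": [12, 3, 7],
    "full_name": ["A. Smith", "B. Jones", "C. Patel"],  # irrelevant to the model
    "postcode": ["AB1 2CD", "EF3 4GH", "IJ5 6KL"],      # irrelevant to the model
})

# Features that are adequate, relevant and limited to what is necessary.
RELEVANT_FEATURES = ["age", "purchase_history"]

# Drop everything else so irrelevant personal data never enters training.
minimised = training_data[RELEVANT_FEATURES]
print(list(minimised.columns))  # ['age', 'purchase_history']
```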

We have produced a more detailed summary of these key themes, which you can read here.

Author

Ben works with clients on matters in the cross-over space of media, IP and technology. His practice has a particular focus on artificial intelligence, data protection, copyright and technology disputes, and he has particular expertise in intermediary liability issues.

Author

Ben advises clients in a wide range of industry sectors, including healthcare, financial services, adtech, video games, consumer and business-to-business organisations, focusing in particular on data protection compliance. He regularly assists clients with global data protection compliance projects and assessments, as well as specific data protection challenges such as international transfers and data security breaches. He is also regularly involved in drafting and negotiating data protection clauses in agreements, and advises clients on electronic direct marketing and cookies.

Author

Joanna advises on a wide range of technology and commercial agreements and matters. Her practice focuses on regulatory issues, especially data protection, consumer law, and advertising and marketing.

Author

Cara is a senior associate in Baker McKenzie's Technology team, based in London. Her practice encompasses aspects of commercial, technology and intellectual property law, with a particular focus on product counselling and on regulatory and data protection issues.