On February 19, 2020, the ICO published its draft guidance on the AI auditing framework for public consultation; the consultation is open until April 1, 2020. We have summarised the key themes below.

What is the draft guidance?

  • The draft guidance, which runs to over 100 pages, provides advice and recommendations on how data protection law applies to artificial intelligence (AI). It clarifies how to assess the data protection risks posed by AI and identifies technical and organisational measures that can help mitigate those risks.
  • The draft guidance is not intended to impose legal obligations beyond those in the General Data Protection Regulation (GDPR); rather, it provides guidance and practical examples on how organisations can apply data protection principles in the context of AI. It also sets out the auditing tools that the ICO will use in its own audits and investigations of AI systems.
  • The ICO has identified AI as one of its top three strategic priorities and has previously issued guidance on AI through its Big Data, AI, and Machine Learning report and the explAIn guidance produced in collaboration with the Alan Turing Institute. The new draft guidance focuses more broadly on managing several different risks arising from AI systems and is intended to complement these existing ICO resources.
  • The draft guidance applies broadly and will be of interest both to organisations that design, build and deploy their own AI systems and to those that use AI systems developed by third parties.

Key Themes

The draft guidance focuses on four key areas:

1. Accountability and governance – The draft guidance highlights that, under the accountability principle, organisations are responsible for the compliance of their AI systems with data protection requirements. They must assess and mitigate the risks posed by such systems, document and demonstrate how the systems comply, and justify the choices they have made. The draft guidance places a strong emphasis on data protection impact assessments (DPIAs): the ICO notes that organisations are under a legal obligation to complete a DPIA if they use an AI system to process personal data, and that this should not be treated as a mere “box ticking” exercise.

2. Fair, lawful and transparent processing – The draft guidance sets out specific recommendations on how the lawfulness, fairness and transparency principles apply to AI systems, with practical examples of controls that can be implemented to address these principles. For instance, it suggests specific methods to address bias and discrimination in AI models, such as using balanced training data (e.g. by adding data on underrepresented subsets of the population; a simple illustration appears after this list). It also highlights that a system’s performance should be monitored on an ongoing basis, with policies setting variance limits for accuracy and bias above which the system should not be used.

3. Data minimisation and security – The draft guidance highlights that using AI to process personal data can exacerbate known security risks, and includes specific recommendations to address these heightened risks. It also stresses that particular care must be taken to comply with the data minimisation principle, given the large data sets required to train AI, and recommends a number of techniques to ensure that AI models process only personal data that is adequate, relevant and limited to what is necessary, for example by removing features from a training data set that are not relevant to the purpose of the model (see the second sketch after this list).

4. The exercise of individual rights – The draft guidance addresses the specific challenges AI systems pose to giving individuals effective mechanisms for exercising their personal data rights, with practical examples and guidance on how these rights apply in the context of AI. For example, it confirms that requests for access, rectification or erasure of training data should not be considered unfounded or excessive simply because they may be more difficult to fulfil (for instance, within a very large training data set). However, the ICO clarifies that there is no obligation to collect or maintain additional personal data solely to enable the identification of individuals within training data for the purpose of complying with such requests, so there may be cases where a request cannot be fulfilled (see the final sketch after this list).
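By way of illustration only, the following minimal sketch shows one balancing technique of the kind contemplated in theme 2: random oversampling of an underrepresented group before training. The data set, column names and group labels are hypothetical, and oversampling is just one of several possible approaches.

```python
import pandas as pd

# Hypothetical training data: "group" records a protected characteristic,
# "outcome" is the label the model will be trained to predict.
train = pd.DataFrame({
    "group":   ["A"] * 90 + ["B"] * 10,
    "feature": list(range(100)),
    "outcome": [0, 1] * 50,
})

# Oversample each underrepresented group (sampling with replacement)
# until every group is as large as the largest one.
counts = train["group"].value_counts()
target = counts.max()

balanced_parts = []
for group, size in counts.items():
    subset = train[train["group"] == group]
    if size < target:
        subset = subset.sample(n=target, replace=True, random_state=0)
    balanced_parts.append(subset)

balanced = pd.concat(balanced_parts, ignore_index=True)
print(balanced["group"].value_counts())  # groups A and B now appear 90 times each
```

Rebalancing the training data in this way does not by itself guarantee a fair model; as the draft guidance notes, performance should still be monitored against predefined accuracy and bias limits once the system is in use.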
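The data minimisation point in theme 3 can be illustrated just as simply: drop any feature that is not necessary for the model's stated purpose before the data enters the training pipeline. The columns below are hypothetical, and deciding which features are genuinely relevant is a contextual and legal judgment that code cannot make.

```python
import pandas as pd

# Hypothetical applicant data gathered for a credit-scoring model.
applications = pd.DataFrame({
    "income":        [32_000, 54_000, 41_000],
    "existing_debt": [5_000, 12_000, 800],
    "postcode":      ["AB1 2CD", "EF3 4GH", "IJ5 6KL"],
    "full_name":     ["Ann Smith", "Ben Jones", "Cara Lee"],
})

# Data minimisation: retain only the features that are adequate, relevant
# and limited to what is necessary for the model's purpose.
RELEVANT_FEATURES = ["income", "existing_debt"]
training_data = applications[RELEVANT_FEATURES].copy()

print(list(training_data.columns))  # ['income', 'existing_debt']
```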
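Finally, on individual rights (theme 4), the sketch below shows the mechanical part of honouring an erasure request against a training set that still carries a direct identifier. The identifier and schema are hypothetical; as the draft guidance makes clear, where no such identifier is retained there is no obligation to create one solely to service these requests, and the request may then be impossible to fulfil.

```python
import pandas as pd

# Hypothetical training set that still includes a direct identifier.
train = pd.DataFrame({
    "subject_id": ["u1", "u2", "u3", "u2"],
    "feature":    [0.2, 0.7, 0.1, 0.9],
})

def erase_subject(data: pd.DataFrame, subject_id: str) -> pd.DataFrame:
    """Drop every training record belonging to one data subject."""
    return data[data["subject_id"] != subject_id].reset_index(drop=True)

# Fulfil an erasure request for subject "u2"; in practice the model
# would then need to be retrained without the erased records.
train = erase_subject(train, "u2")
print(train)
```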

We have produced a more detailed summary of these key themes, which you can read here.