The ICO, together with The Alan Turing Institute, recently published its finalised guidance on explaining decisions made with AI, following a public consultation which closed in January this year.  

Who should read this?

  • The guidance is relevant for any organisation using, or thinking of using, AI to support or make decisions about individuals (including if you are procuring an AI system from a third party).
  • It will be of particular use for DPOs, and legal and compliance teams grappling with how to satisfy transparency and accountability requirements in the context of AI. However, specific sections are also aimed at technical teams and senior management, emphasising the importance of considering how to explain AI throughout the development and implementation of AI systems.

What is the status of the guidance?

  • The guidance was produced by the ICO in collaboration with the Alan Turing Institute, the UK’s national institute for data science and AI. It is intended to give organisations practical advice on how to explain decisions made or assisted by AI systems processing personal data, to the people affected by them.
  • The guidance is not a statutory code of practice under the Data Protection Act 2018, but aims to clarify how to apply data protection obligations and highlights best practice. It also highlights other legal instruments that may be relevant to good practice in explaining AI-assisted decisions, in particular, the Equality Act 2010.
  • The guidance has been published in response to the Government’s AI Sector Deal, published in April 2018, which tasked the ICO and the Turing Institute with working together to develop guidance to assist in explaining AI decisions.
  • This guidance complements existing ICO guidance, such as its Big Data, AI, and Machine Learning report, and follows the publication of draft guidance on the AI auditing framework in February 2020 (please see our summary here).

What does the guidance say?

  • The guidance states that “the primary aim of explaining AI-assisted decisions is justifying a particular result to the individual whose interests are affected by it”. As a basic requirement, organisations must demonstrate how those involved in the development of the AI system acted responsibly and must make the reasoning behind the outcome of an AI-assisted decision clear. This is to satisfy an individual’s right to obtain meaningful information about the logic involved in AI-assisted decision-making, and to allow them to express their point of view and contest a decision.
  • The guidance is structured in three parts: the basics of explaining AI; explaining AI in practice; and what explaining AI means for your organisation. We have summarised the key points below. 

Part 1: The basics of explaining AI

  • This section is aimed at DPOs and compliance teams and outlines a number of different types of explanations in relation to AI, and differentiates between process-based explanations (information on the governance of the AI system from design to deployment) and outcome-based explanations (what happened in the case of a particular decision). The guidance identifies six main types of explanation:
    • Rationale explanations: the reasons behind a decision, explained in an accessible and non-technical way;
    • Responsibility explanations: who is involved in developing, managing and implementing an AI system, and who to contact for human review of a decision;
    • Data explanations: what data has been used in a particular decision and how;
    • Fairness explanations: steps taken in designing and implementing an AI system to ensure decisions are generally unbiased and fair, and whether or not an individual has been treated equitably;
    • Safety and performance explanations: steps taken in designing and implementing an AI system to maximise the accuracy, reliability, security and robustness of AI decisions; and
    • Impact explanations: steps taken in designing and implementing an AI system to consider the impact that the AI system may have on an individual and on wider society.
  • By differentiating between the types of explanation, the guidance attempts to provide an outline for organisations to follow to help identify what information should be documented and included in an explanation.
  • The guidance emphasises the importance of context in determining what information should be included in an explanation, but states that, in most cases, explaining AI-assisted decisions involves identifying what is happening in the AI system and who is responsible. Therefore, rationale and responsibility explanations should generally be prioritised.
  • The guidance includes a summary of five contextual factors which affect what an individual may want to use an explanation for, and how an explanation should be delivered. These are:
    • The domain you work in: this refers to the setting or sector in which you deploy the AI model (e.g. an individual’s expectations will be different for decisions made in the criminal justice domain compared to other domains like healthcare or financial services);
    • Impact on the individual: this is the effect an AI decision can have on an individual and wider society;
    • Data used: the data used to train and test the AI model as well as the input data at the point of the decision;
    • Urgency of the decision: the importance of receiving or acting upon the outcome of an AI assisted decision within a short time frame; and
    • Audience it is being presented to: the individuals you are explaining an AI decision to, which includes the groups of people you make decisions about, and the individuals within those groups.
  • The guidance also states that organisations should ensure decisions made using AI are explainable by following the four principles below:
    • Be transparent: this is an extension of obligations under the GDPR in relation to lawfulness, fairness and transparency of processing, and is about making the use of AI for decision making obvious to individuals and appropriately explaining the decisions to individuals in a meaningful way; 
    • Be accountable: this is derived from the accountability principle under the GDPR, and concentrates on the process and actions carried out when designing (or procuring/outsourcing) and deploying AI models in terms of demonstrating compliance and data protection by design and default;
    • Consider the context you are operating in: this is about paying attention to different, but interrelated, elements that can have an effect on explaining AI-assisted decisions, and managing the overall process. The ICO acknowledges there is no one-size-fits-all approach to explaining AI-assisted decisions, and also flags that considering context is not a one-off exercise and should be thought about at all stages of the process; and
    • Reflect on the impact of your AI system on the individuals affected, as well as wider society: this helps to explain to individuals affected by decisions that the use of AI will not harm or impair their wellbeing, and involves asking and answering questions about the ethical purposes and objectives of the AI project at the initial stages, as well as revisiting and reflecting on impacts throughout the development and implementation of the project.

Part 2: Explaining AI in practice

  • This section focuses on the practicalities of how to explain AI decisions, and is primarily intended for technical teams (although DPOs and compliance teams may also find it useful).
  • This section is aimed at helping organisations select the appropriate explanation for their use case, choose an appropriately explainable model and use tools to extract explanations from less interpretable models. It sets out discrete tasks to be completed throughout the development and implementation stages – from building systems to ensure organisations are able to extract relevant information for each explanation type, to selecting priority explanations and considering how to build and present the explanation.
  • The tasks set out in this section are:
    • Selecting priority explanations by considering the domain, use case and impact on the individual;
    • Collecting and pre-processing data in an explanation-aware manner;
    • Building the system to ensure you can extract relevant information for a range of explanation types;
    • Translating the rationale for the system’s results into usable and easily understandable reasons;
    • Preparing implementers (human decision makers) to deploy the AI system, which involves training;
    • Considering how to build and present the explanations, with context being the cornerstone.
  • This section includes some practical recommendations on how to present explanations, recommending the use of: (i) simple graphs and diagrams to help ensure explanations are clear and easy to understand; and (ii) a layered approach to help avoid information fatigue. Under a layered approach, an organisation proactively provides individuals first with the explanations which have been prioritised based on context, and makes additional explanations available in further layers (e.g. by use of expanding sections, tabs or links to webpages with additional explanations).
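  • The guidance does not prescribe any particular tooling, but a minimal sketch of the kind of prioritised, layered rationale explanation Part 2 contemplates might look as follows. This example assumes an inherently interpretable model (scikit-learn’s LogisticRegression) with hypothetical feature names for a credit-style decision; the translation of coefficients into plain-language reasons and the layering are illustrative only, not the ICO’s or the Turing Institute’s method.

```python
# Illustrative sketch only; not part of the ICO/Turing guidance.
# Assumes an inherently interpretable model (logistic regression) and
# hypothetical feature names for a credit-style decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "existing_debt", "years_at_address"]  # hypothetical

# Toy training data standing in for a real, documented dataset.
X_train = np.array([[30, 20, 1], [60, 5, 4], [45, 30, 2], [80, 2, 10]], dtype=float)
y_train = np.array([0, 1, 0, 1])  # 1 = application approved

model = LogisticRegression().fit(X_train, y_train)

def rationale_explanation(x):
    """Translate per-feature contributions into non-technical 'reasons'."""
    contributions = model.coef_[0] * x          # signed contribution of each feature
    order = np.argsort(-np.abs(contributions))  # most influential features first
    return [
        f"{feature_names[i]} {'supported' if contributions[i] > 0 else 'weighed against'} approval"
        for i in order
    ]

def layered_explanation(x):
    """Layered presentation: prioritised rationale up front, further detail behind it."""
    return {
        "layer_1_rationale": rationale_explanation(x)[:2],      # shown proactively
        "layer_2_data": dict(zip(feature_names, x.tolist())),   # input data used
        "layer_3_responsibility": "Contact the review team to query or contest this decision.",
    }

print(layered_explanation(np.array([40.0, 25.0, 3.0])))
```

  • In practice the model, the wording of each reason and the number of layers would be driven by the contextual factors described in Part 1, and the documentation requirements described in Part 3.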

Part 3: What explaining AI means for your organisation

  • This section details the internal measures that can be put in place to ensure an organisation is able to provide meaningful explanations to individuals affected by decisions made by AI, and is primarily intended for senior executives, but DPOs and compliance teams may also find it useful.
  • The guidance emphasises that anyone involved in the decision making pipeline has a role in contributing to the explanation of a decision supported by an AI model’s result. This includes product managers, the AI development team (which could also be a third party provider), implementers (humans in the loop if the decision is not fully automated), compliance teams and DPOs, and senior management with overall responsibility for ensuring the AI system is appropriately explainable to the recipient of the decision.
  • It focuses on the roles, policies, procedures and documentation needed to ensure meaningful explanations can be provided to individuals. For example, it recommends documenting how each stage of an organisation’s use of AI contributes to building an explanation, and organising such documentation to ensure relevant information can be easily accessed and understood by those providing explanations to decision recipients. It also highlights that organisations should ensure they have a designated and capable human point of contact for individuals to contact to query or contest a decision.
  • The guidance also highlights that if you are sourcing an AI system (or significant parts of it) from a third party supplier, as a data controller you will have primary responsibility for ensuring the AI system you are using is capable of producing an appropriate explanation for the recipient of the decision. If you are procuring the system from a third party supplier, it is important that you understand how the system works and can extract meaningful information to provide an appropriate explanation. It is also important that the third party can provide you with sufficient training and support, for example so that implementers can understand the model being used.

Next Steps

  • The complexity of the systems involved can make explaining AI decisions challenging. The guidance could prove a useful tool for organisations to re-assess how they approach explaining AI decisions. By differentiating between types of explanations and providing a “checklist” of tasks to help organisations structure the process of building and presenting explanations of AI systems and decisions to individuals, the guidance offers a framework for addressing this issue and demonstrating compliance.
Author

Ben advises clients in a wide range of industry sectors, including healthcare, financial services, adtech, video games, and consumer and business-to-business organisations, focusing in particular on data protection compliance. Ben regularly assists clients with global data protection compliance projects and assessments, as well as specific data protection challenges such as international transfers and data security breaches. He is also regularly involved in drafting and negotiating data protection clauses in agreements for clients across industry sectors, and regularly advises on electronic direct marketing and cookies.

Author

Joanna advises on a wide range of technology and commercial agreements and matters. Her practice focuses on regulatory issues, especially data protection, consumer law, and advertising and marketing, and she regularly advises clients in these areas.

Author

Cara is an associate in Baker McKenzie's Technology team, based in London. Her practice encompasses aspects of commercial, technology and intellectual property law. She has a particular focus on product counselling, regulatory and data protection issues.