Organizations are increasingly integrating artificial intelligence (AI) technologies, including automated decision-making (ADM) processes, into their workflows and relying on these applications to make critical decisions.[1] AI is being leveraged for a variety of use cases (e.g., IT process automation, marketing and customer care, security and threat detection, recruitment and employment) and across industries including technology and communications, financial services, manufacturing, and consumer services. Any AI system is driven and sustained by data. For this reason, global AI regulatory proposals provide that organizations must implement effective information governance (IG) measures, including data protection, data security, and recordkeeping. In particular, recent AI regulatory trends require organizations to properly classify, and maintain appropriate documentation for, “high-risk” and “high-impact” AI systems: systems that pose a risk of harm to health and safety or an adverse impact on the fundamental rights of individuals, such as the right to protection from discriminatory or biased employment and consumer practices.
In an earlier article, “Information Governance in AI Development,” we examined the ways in which IG measures can ensure the availability of accessible, trusted, secure, and reliable data in the use of AI technologies. This article examines recent AI regulatory trends that require organizations to develop and maintain sufficient audit trails and records throughout the lifecycle of AI systems, assess the potential risk and impact of those systems, and establish interdisciplinary teams, including IG and IT specialists, to determine how these records will be used for accountability and transparency purposes.
Intersection of AI and IG
Organizations that design, develop, use, and/or deploy AI systems should implement IG practices that are proportionate to the potential impact of the specific AI system (e.g., stricter measures for high-impact AI systems), including maintaining appropriate records of data and modelling methodologies to ensure reproducibility and traceability. These records include documentation of (i) the reasons for using AI, (ii) data collection, preparation, and post-processing, (iii) integration into IT infrastructure and technical choices, (iv) the personnel involved in the design and implementation of the AI model, (v) system performance and security, (vi) monitoring and assessment, and (vii) system training data and code.[2]
In many jurisdictions, AI regulation is still in development, and many AI system management practices are set out in industry standards, regulatory guidance, government consultations, and/or legislative proposals. Given the rapid global increase in AI use, the implementation of AI regulatory measures is, if not imminent, certainly on the horizon. Even in the absence of specific AI laws or regulations, organizations should continue to leverage effective IG practices (e.g., records retention schedules, retention/destruction policies, privacy policies, and legal hold processes) that will assist them in complying with future mandatory AI requirements. Furthermore, where personal information is collected, used, and disclosed in AI systems, organizations must already ensure they operate in compliance with applicable privacy laws (e.g., the EU GDPR) and human rights and anti-discrimination laws (e.g., restrictions on using ADM for recruitment or employee monitoring).
AI Regulatory Proposals and Trends
Outlined below are recent global developments in AI regulation, including personal data protection and recordkeeping requirements:
Australia – There are currently no AI- or ADM-specific laws or regulations in Australia. To address this gap, the Australian government has spent the past few years engaging in AI regulatory consultation and review processes. The Artificial Intelligence Ethics Framework (2019) serves as a guide for organizations to responsibly design, develop, and implement AI by applying specific principles, including privacy protection and security; this includes ensuring proper data governance and management for all data used and generated by the AI system throughout its lifecycle. In March 2022, the Department of Industry, Science and Resources launched a consultation on AI and ADM regulation and released an issues paper, which discusses the regular use of personal information in AI and ADM systems as well as the impact of ADM systems on privacy, including inference and decision-making based on personal information that may be inaccurate.
Canada – In June 2022, Bill C-27, the Digital Charter Implementation Act, was introduced to modernize Canada’s privacy regime. This proposed legislation introduces a new legal framework, the Artificial Intelligence and Data Act (“AIDA”), which seeks to regulate international and inter-provincial trade and commerce in AI systems by requiring certain entities to adopt risk mitigation measures addressing harm and biased output from high-impact AI systems. The AIDA prohibits the possession or use of illegally obtained personal information for the purpose of designing, developing, using, or making available for use an AI system where its use causes serious harm to individuals. Organizations using high-impact AI systems would be required to carry out assessments, establish risk mitigation measures, monitor those measures, and maintain related records.
Canada’s Bill C-27 also introduces the Consumer Privacy Protection Act (“CPPA”), which seeks to protect personal information that is collected, used, or disclosed in the course of commercial activities. The CPPA would require organizations that use an ADM system to make a decision about an individual to provide the individual, on request, with an explanation of the decision. The explanation would need to indicate the type of personal information used to make the decision, the source of the information, and the reasons that led to the decision. Organizations holding personal information that is the subject of such a request would be required to retain the information for as long as necessary to allow the individual to exhaust any recourse available under the CPPA.
European Union – The proposed Artificial Intelligence Act provides that high-risk AI systems must be designed to enable the automatic recording of events (e.g., logs) while the system is operational. Providers of high-risk AI systems would need to develop a quality management system to ensure compliance and establish a post-market monitoring system to track the performance and continued compliance of those systems. Providers would also be required to maintain the following records for a period of 10 years after the AI system has been placed on the market or put into service: (i) technical documentation; (ii) quality management system documentation; (iii) documentation regarding changes approved by notified bodies (where applicable); (iv) decisions and documents issued by notified bodies (where applicable); and (v) the EU declaration of conformity.
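To make the event-recording requirement concrete, the sketch below shows one way “logging by design” might be implemented. It is a minimal, hypothetical illustration: the system identifier, event fields, and log destination are all assumptions made for the example, and the proposed Act prescribes outcomes (traceable operational logs) rather than any particular design.

```python
# Minimal, hypothetical sketch of automatic event recording for an AI system.
# All identifiers here are illustrative assumptions, not regulatory requirements.
import hashlib
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only JSON-lines log; in production this might be a write-once (WORM)
# store to support multi-year retention periods.
logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO,
                    format="%(message)s")

MODEL_ID = "credit-scoring"   # illustrative system identifier
MODEL_VERSION = "1.4.2"       # recorded so each decision traces to a model build

def record_event(inputs: dict, output, decision_basis: str) -> str:
    """Record one operational event with enough context to reconstruct it later."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": MODEL_ID,
        "model_version": MODEL_VERSION,
        # Hash rather than store raw inputs, so the log itself does not retain
        # personal data longer than necessary.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "decision_basis": decision_basis,
    }
    logging.info(json.dumps(event))
    return event["event_id"]

# Example: wrap each model invocation so an event is recorded automatically.
def score_applicant(applicant: dict) -> float:
    score = 0.72  # placeholder for a real model's output
    record_event(applicant, score, decision_basis="income/debt-ratio features")
    return score
```

Hashing inputs rather than storing them keeps the audit trail useful for traceability while limiting the personal data held in the log itself.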
Hong Kong – In August 2021, the Office of the Privacy Commissioner for Personal Data released the Guidance on the Ethical Development and Use of Artificial Intelligence, which applies to the development or use of AI systems that involve the use of personal data or the identification, assessment, or monitoring of individuals. The guidance provides that AI systems should be continuously monitored and reviewed, since the risk factors related to the application of AI systems (e.g., the relevance of training data and the reliability of AI models) may evolve and change over time. Organizations should maintain proper documentation of the risk assessments, design, development, testing, and use of AI systems. Organizations should also maintain proper documentation of the handling of data, both to ensure that the quality and security of data are maintained over time and to ensure compliance with the Personal Data (Privacy) Ordinance. This documentation includes: (i) the sources of the data; (ii) the allowable uses of the data; (iii) how the data used was selected from the pool of available data; (iv) how the data was collected, curated, and transferred within the organization; (v) where the data is stored; and (vi) how data quality is maintained over time.
Japan – In January 2022, the Ministry of Economy, Trade and Industry released an updated version of the Governance Guidelines for Implementation of AI Principles, which set out high-level action targets for organizations involved in AI business, including the development and operation of AI systems in their business transactions. The guidelines provide that AI organizations should maintain adequate records explaining the implementation status of their AI management systems, records of the gap analyses carried out in individual AI system development projects, implementation outlines for any AI-related training provided, and minutes of internal meetings and meetings with other companies regarding AI system development and operation, and should ensure that relevant persons other than those engaged in these tasks are able to access these records.
United States – Introduced in February 2022, the proposed Algorithmic Accountability Act of 2022 would require organizations to conduct impact assessments of ADM systems and augmented critical decision processes. Organizations would be required to maintain documentation of any impact assessment performed, including applicable information from critical decision-making processes, for 3 years longer than the period during which the automated decision system or augmented critical decision process is deployed. In October 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights (“Blueprint”) to guide the design, use, and deployment of automated systems. The Blueprint provides that designers, developers, and deployers of automated systems should seek individuals’ permission and respect their decisions regarding the collection, use, access, transfer, and deletion of their data in appropriate ways, and should consider alternative “privacy by design” safeguards.
IG Best Practices for AI Systems
- Conduct a system assessment – Assess the risk impact of the AI system and classify it appropriately to determine the level of IG measures to employ, including data protection, security, and access controls.
- Understand the data – Determine what types of data are being collected, used, and stored in the AI system, including any personal and sensitive data.
- Follow the data – For any AI system, ensure that appropriate logging and audit trail systems are in place throughout the lifecycle of the system (the event-recording sketch above illustrates one possible approach). Engage in routine monitoring of AI system data.
- Maintain adequate records – Determine the scope and content of appropriate records, including documentation of any actions and events carried out by AI algorithms. Document and retain any decisions relating to the AI system, and the reasons for those decisions, for the lifecycle of the system.
- Review IG practices – Review and update, as necessary, enterprise retention schedules and related destruction and privacy policies to account for AI system records (see the retention-check sketch following this list). Ensure that personal and sensitive data (e.g., biometrics) are not retained for longer than needed to meet the purpose for which the data was collected and/or processed.
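The sketch below complements the practices above by showing how an IG team might operationalize a retention review for AI system records. It is a minimal illustration in which the record classes, retention periods, and field names are assumptions chosen for the example; the 10-year and 3-year periods loosely echo the EU and U.S. proposals discussed earlier, while the 2-year operational-log period stands in for an internal policy.

```python
# Minimal, hypothetical retention check for AI system records. Record classes,
# retention periods, and field names are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION_RULES = {
    "technical_documentation": timedelta(days=365 * 10),  # echoes the EU proposal
    "impact_assessment": timedelta(days=365 * 3),         # echoes the U.S. proposal
    "operational_log": timedelta(days=365 * 2),           # assumed internal policy
}

@dataclass
class AIRecord:
    record_id: str
    record_class: str
    trigger_date: date             # e.g., date the system was retired or placed on market
    contains_personal_data: bool   # flagged so purpose-limitation review can prioritize it

def disposition_due(record: AIRecord, today: date) -> bool:
    """True once the record has passed its retention period and is eligible
    for defensible destruction (absent a legal hold)."""
    return today > record.trigger_date + RETENTION_RULES[record.record_class]

# Example: flag records that are due for a disposition review.
inventory = [
    AIRecord("R-001", "impact_assessment", date(2020, 1, 15), False),
    AIRecord("R-002", "technical_documentation", date(2021, 6, 1), False),
]
due = [r.record_id for r in inventory if disposition_due(r, date.today())]
print("Records due for disposition review:", due)
```

A real schedule would also track legal holds and the purpose-limitation checks noted above for personal and sensitive data; the point of the sketch is simply that retention rules for AI records can be expressed and enforced programmatically once the records are classified.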
[1] The IBM Global AI Adoption Index 2022 reports that the global AI adoption rate is now 35%, with an additional 42% of organizations exploring possibilities for integrating AI solutions.
[2] European Insurance and Occupational Pensions Authority, Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector (2021), page 59. Regardless of the industry/sector, certain data governance and recordkeeping practices set out in this guidance are relevant to any organization’s use of AI systems.