The World Health Organization (WHO) has released a publication outlining key considerations for the regulation of artificial intelligence for health. This follows the EMA's consultation on the use of Artificial Intelligence (AI) in the medicinal product lifecycle, which is open until 31 December 2023 (see our post here for more information).
The publication aims to promote dialogue among stakeholders, including developers, regulators, manufacturers, health workers and patients. The WHO sets out six key regulatory considerations for AI for health, followed by 18 key recommendations (based on these six topics) for stakeholders to take into account as they continue to develop frameworks and best practices for the use of AI in healthcare.
Read our full breakdown of the recommendations and how to put them into practice here. The six key regulatory considerations laid out by the WHO are:
- 1. Documentation and transparency: it is important to maintain appropriate and effective documentation and record-keeping on the development and validation of AI systems, including their intended medical purposes and development process. This is essential to establish trust and to allow for the regulatory assessment and evaluation of AI systems (including tracing back the complex development process).
- 2. Risk management and AI systems development lifecycle approach: risks associated with AI systems, such as cybersecurity threats, vulnerabilities and biases, should be considered throughout the total product lifecycle and addressed as part of a holistic risk management approach. Such holistic risk evaluation and management needs to take into account the full context in which the AI system is intended to be used.
- 3. Intended use and analytical and clinical validation: transparent documentation on the intended use of the AI system, on the composition of the training dataset underpinning it, and on external datasets and performance metrics should be available to demonstrate the safety and performance of the AI system.
- 4. AI-related data quality: data is the most important asset for training AI systems. All AI solutions rely on data, and its quality will affect the systems' safety and effectiveness. The development of an AI system must therefore be supported by data of sufficient quality to achieve the intended purpose. Data quality issues and challenges need to be identified, and pre-release trials of AI systems need to be conducted to ensure they do not amplify biases and errors or create harm.
- 5. Privacy and data protection: these are to be considered from the outset of the design, development and deployment of AI systems, taking into account that health data qualify as sensitive personal data, which are generally subject to a higher degree of protection. Privacy and cybersecurity risks must be addressed as part of the compliance program. A good understanding of the applicable privacy and data protection legal framework is key to ensuring compliance. Beyond privacy and data protection, ethical considerations must also be taken into account.
- 6. Engagement and collaboration: engagement and collaboration among key stakeholders (developers, manufacturers, health-care practitioners, patients, patient advocates, policy-makers, regulatory bodies and others) should be fostered and facilitated in order to ensure that products and services stay compliant throughout the whole lifecycle.