In brief
On Thursday, November 14, 2024, the U.S. Department of Homeland Security (“DHS”) announced its groundbreaking “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” (“Framework”). The Framework is a guide for deploying AI safely and securely across all sixteen sectors of U.S. critical infrastructure, including communications, critical manufacturing, energy, financial services, healthcare, and information technology. It emphasizes the importance of risk-based mitigations to reduce potential harms to critical infrastructure and highlights the need for transparency and information sharing among stakeholders. The Framework builds on existing voluntary AI risk frameworks, including National Institute of Standards and Technology publications, and tailors its recommendations so that entities at each layer of the AI supply chain can better ensure that AI is deployed safely and securely in U.S. critical infrastructure. Aspects of the Framework aim to protect critical infrastructure in ways similar to the Cyber Incident Reporting for Critical Infrastructure Act (“CIRCIA”); see our CIRCIA fact sheet for more details on that law. The Framework’s recommendations are the result of considerable deliberation among government, private industry, and think tank stakeholders. While voluntary and not binding, the recommendations are likely to be influential in shaping industry standards.
Background
On October 30, 2023, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO 14110”), which tasked DHS with establishing the AI Safety and Security Board (“Board”) to provide “advice, information, or recommendations” to the Secretary of Homeland Security (“Secretary”) and the critical infrastructure community regarding the use of AI. The Board includes representatives from top AI companies, major computing and semiconductor firms, government, and civil society.
Building upon the Administration’s Blueprint for an AI Bill of Rights (“AI Bill of Rights”) and the National Security Memorandum on Critical Infrastructure Security and Resilience (“NSM 22”), DHS’s Cybersecurity and Infrastructure Security Agency identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: 1) attacks using AI; 2) attacks targeting AI systems; and 3) design and implementation failures. DHS and the Board then mapped each category of risk to critical infrastructure, prioritizing risk management efforts according to the scale and severity of potential harms:
- Asset-Level Risk: The lowest level of risk covers disruption or physical damage to the operations, assets, direct suppliers, or systems of a critical infrastructure entity stemming from high-risk uses of AI, typically resulting from design or deployment flaws that impair service to a local population.
- Sector Risk: This level involves risks affecting sector-wide operations beyond an individual asset, and commonly includes: 1) operational failures of AI systems deployed in energy or water utilities; 2) interruptions in the provision of vital services, such as those in the medical or financial sectors; and 3) the targeting of election infrastructure enabled by AI misuse.
- Systemic and Cross-Sector Risk: The second-highest risk level includes risks arising from the increasingly interdependent nature of critical infrastructure, such as: 1) disruptions to the information technology sector from AI-enabled cyberattacks and incidents that restrict access to critical services; 2) negative environmental impacts associated with growing AI adoption; 3) AI incidents resulting in significant financial loss; 4) errors in AI-enabled sensing technologies that substantially hinder access to the services of critical infrastructure entities; and 5) disruptions in logistics supply chains due to failures in AI-enhanced processes.
- Nationally Significant Risk: The highest risk level includes risks of AI causing widespread impacts on safety or rights, or significantly assisting in the development, manufacture, or deployment of conventional, chemical, biological, radiological, or nuclear weapons.
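For organizations that maintain an internal AI risk register, these tiers can be encoded directly so that incidents and use cases are triaged by the Framework’s scale-of-impact ordering. The Python sketch below is purely illustrative and is not part of the Framework; the tier names paraphrase the categories above, and the class, field, and function names are hypothetical.

```python
# Illustrative only: encoding the Framework's four risk tiers so incidents
# can be triaged by scale of impact. Tier names paraphrase the Framework;
# the class and function names are hypothetical.
from enum import IntEnum

class RiskTier(IntEnum):
    """DHS Framework risk tiers, ordered from lowest to highest impact."""
    ASSET_LEVEL = 1             # disruption or damage to a single asset or system
    SECTOR = 2                  # impacts on sector-wide operations or vital services
    SYSTEMIC_CROSS_SECTOR = 3   # cascading impacts across interdependent sectors
    NATIONALLY_SIGNIFICANT = 4  # widespread harm to safety or rights, or CBRN risks

def triage(incidents: list[tuple[str, RiskTier]]) -> list[tuple[str, RiskTier]]:
    """Order incidents so the highest Framework tier is addressed first."""
    return sorted(incidents, key=lambda item: item[1], reverse=True)

print(triage([
    ("local sensor outage", RiskTier.ASSET_LEVEL),
    ("utility control-model failure", RiskTier.SECTOR),
]))
```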
The Framework
The main goal of the Framework is to ensure the safe, secure, and ethical deployment of AI technologies in sectors that are vital to national security and public welfare. In providing the Framework, DHS aims to reduce the likelihood and severity of consequences within each of the risk categories listed above by recommending how key stakeholders can secure environments, drive responsible model design, implement data governance, ensure safe and secure deployment, and monitor performance and impact. The detailed guidelines are directed at five groups: 1) cloud and compute infrastructure providers; 2) AI developers; 3) critical infrastructure owners and operators; 4) civil society; and 5) the public sector. The Framework defines “AI Developers” as “entities that develop, train, and/or enable access to AI models or applications, including through their own or third-party platform services and tools,” which is broadly consistent with the definitions in the European Union’s AI Act and the Colorado Artificial Intelligence Act. The table below summarizes the shared and distinct roles and responsibilities set out by the Framework:
| Stakeholder | Secure Environments | Drive Responsible Model and System Design | Implement Data Governance | Ensure Safe and Secure Deployment | Monitor Performance and Impact |
| --- | --- | --- | --- | --- | --- |
| Cloud and Compute Infrastructure Providers | Vet hardware and software suppliers; institute best practices for access management; manage physical security | Report vulnerabilities | Keep data confidential; ensure data availability | Conduct systems testing | Monitor for anomalous activity; prepare for incidents; establish clear pathways to report harmful activities |
| AI Developers | Manage access to models and data; prepare incident response plans | Incorporate Secure by Design principles; evaluate alignment with human-centric values | Respect individual choice and privacy; promote data and output quality | Use a risk-based approach when managing access to models; distinguish AI-generated content; validate AI system use; provide meaningful transparency to customers and the public; evaluate real-world risks and possible outcomes; maintain processes for vulnerability reporting and mitigation | Monitor AI models for unusual or adversarial activity; identify, communicate, and address risks; support independent assessments |
| Critical Infrastructure Owners and Operators | Secure existing IT infrastructure | Use responsible procurement guidelines; evaluate AI use cases and associated risks; implement safety mechanisms; establish appropriate human oversight | Protect customer data used to configure or fine-tune models; manage data collection and use | Maintain cyber hygiene; provide transparency and consumer rights; build a culture of safety, security, and accountability for AI; train the workforce | Account for AI in incident response plans; track and share performance data; conduct periodic and incident-related testing, evaluation, validation, and verification (“TEVV”); measure impact; ensure system redundancy |
| Civil Society | Responsibilities span all five activities: actively engage in developing and communicating standards, best practices, and metrics alongside government and industry; educate policymakers and the public; inform guiding values for AI systems development and deployment; support the use of privacy-enhancing technologies; consider critical infrastructure use cases for red-teaming standards; continue to drive and support research and innovation | | | | |
| Public Sector | Responsibilities span all five activities: deliver essential services and emergency response; drive global AI norms; responsibly leverage AI to improve the functioning of critical infrastructure; advance standards of practice through law and regulations; engage community leaders; enable foundational research into AI safety and security; support critical infrastructure’s safe and secure adoption of AI; develop oversight | | | | |
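As recommended under “What’s next” below, companies may want to map these recommendations to their existing AI risk management processes. One purely illustrative way to operationalize that mapping is to encode the table as data and run a simple gap analysis against already-implemented controls. In the Python sketch below, the two sample entries paraphrase cells from the table above; the FRAMEWORK variable, the gap_analysis function, and the coverage logic are hypothetical and not prescribed by DHS.

```python
# Illustrative only: a simple gap analysis of implemented controls against
# the Framework's roles-and-responsibilities table. The sample entries
# paraphrase the table above; all names and logic are hypothetical.
FRAMEWORK = {
    "AI Developers": {
        "Secure Environments": [
            "Manage access to models and data",
            "Prepare incident response plans",
        ],
        "Implement Data Governance": [
            "Respect individual choice and privacy",
            "Promote data and output quality",
        ],
    },
}

def gap_analysis(role: str, implemented: set[str]) -> dict[str, list[str]]:
    """Return, per activity, the Framework items not yet covered by controls."""
    gaps: dict[str, list[str]] = {}
    for activity, items in FRAMEWORK.get(role, {}).items():
        missing = [item for item in items if item not in implemented]
        if missing:
            gaps[activity] = missing
    return gaps

# Example: a developer with an incident response plan but no other controls.
print(gap_analysis("AI Developers", {"Prepare incident response plans"}))
```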
What’s next
Secretary Alejandro Mayorkas’ Framework cover letter sets out an intent to further engage with critical infrastructure partners to understand how the Framework can be adapted for sector-specific needs, and to convene dialogues with international partners on how to harmonize the approach to AI safety and security across critical infrastructure globally.
Secretary Mayorkas expects Board members to implement the Framework’s guidelines within their organizations, which he hopes will “catalyze other organizations in their respective spheres and across the ecosystem, to adopt and implement the guidelines as well, and to have this take hold and to become the framework that will assist in driving harmonization”. The outlook seems less certain within the Federal government. The Framework and the Board were developed as a result of President Biden’s EO 14110, and President-elect Trump has indicated he may repeal EO 14110. It remains to be seen whether the Board will persist under the next Administration, but the Framework itself is more likely to endure given that it was developed with significant input from industry. Baker McKenzie has developed a report on the anticipated business impact the new Administration will have in several areas, including technology.
Companies, especially those in critical infrastructure, are encouraged to review and map the recommendations set out in the Framework to their existing AI risk management processes, and consider implementing any new or missing elements. If you have any questions regarding compliance or tailoring your company’s AI, privacy, or cybersecurity governance program, please contact your Baker McKenzie attorney or the authors below.