On April 29, 2024, the Department of Commerce’s National Institute of Standards and Technology (NIST) released initial drafts of four significant policy and governance documents aimed at improving the safety and reliability of AI systems. The launch came on the 180th day following President Biden’s Executive Order 14110 on the Safe, Secure and Trustworthy Development of AI, which instructed NIST to establish guidelines and best practices to promote consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems.

AI RMF Generative AI Profile

The first draft document, the Generative AI Profile (NIST AI 600-1), is intended as a companion resource to NIST’s AI Risk Management Framework, which NIST released last year and which NIST developed to help organizations manage risks associated with AI. The Generative AI Profile outlines twelve risk areas associated with generative AI:

  1. CBRN information: Increased availability of information regarding chemical, biological, radiological, or nuclear (CBRN) weapons
  2. Confabulation: The generation of false information by an AI system, popularly referred to as “hallucination”
  3. Violent recommendations: AI systems mayproduce outputs that incite or glorify violence
  4. Data privacy: Generative AI may leak, generate or infer personal information about individuals
  5. Environmental: The training, maintenance and deployment of generative AI can consume environmental resources and create significant carbon emissions
  6. Human-AI configuration: Misconfiguration of an AI system can result in the system not working as intended
  7. Information integrity: Generative AI can enable the production of false information capable of deceiving people or causing harm
  8. Information security: Techniques like prompt injection can use generative AI to exploit vulnerabilities in interconnected systems
  9. Intellectual property: Generative AI may infringe intellectual property rights
  10. Obscene, degrading, and/or abusive content: Generative AI can facilitate the production of obscene content, including without the knowledge or permission of the individuals supposedly depicted by such content
  11. Toxicity, bias, and homogenization: The exposure of AI systems to toxic or biased materials can lead to “representation harms”
  12. Value chain and component integration: Third party components (e.g., datasets, pre-trained models) involved in developing a generative AI system may not have been properly obtained or vetted

The Generative AI Profile also suggests various risk mitigation steps, which are both cross-referenced to subcategories in the AI Risk Management Framework and which are intended to address specific risks enumerated in the Profile.

Secure Software Development Practices for Generative AI and Dual-Use Foundation Models

The second draft document, the Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (SP 800-218), supplements the Secure Software Development Framework (SSDF), NIST’s software security framework. The Secure Software Development Practices is aimed primarily towards those involved in the development and training of AI models and specifically concerns best practices for sourcing reliable and secure training data for AI models.

As with the Generative AI Profile, the Secure Software Development Practices, includes a list of specific actions organizations should take to mitigate risks and these actions are also mapped onto AI Risk Management Framework subcategories. The NIST announcement specifically highlights that the new guidance prompts developers to scrutinize training data for “signs of poisoning, bias, homogeneity and tampering.”

Reducing Risks Posed by Synthetic Content

The third draft document, Reducing Risks Posed by Synthetic Content (NIST AI 100-4), focuses on strategies for detecting, authenticating and labelling synthetic content. The guide notes that harm can arise from AI-generated synthetic content when it “supports misinformation and disinformation, synthetic [child sexual abuse material] and [non-consensual intimate imagery], and fraud and financial schemes.”

The document breaks down digital content transparency mechanisms into three overall strategies: (1) provenance data tracking (which includes metadata recording and digital watermarking); (2) synthetic content detection (including provenance detection, automated content-based detection, and human-assisted detection); and (3) normative, education, regulatory and market-based approaches (the last of which is outside the scope of the document). The document weighs the benefits and likely efficacy of these methods, with potential risks, such as those to privacy, data integrity, and security. In its conclusion the document states plainly that “[e]ach of the approaches described in this report holds the promise of helping to improve trust…Yet each has important limitations that are both technical and social in nature.”

Global Engagement on AI Standards

The final of the draft documents, entitled Global Engagement on AI Standards (NIST AI 100-5), “establishes a plan for global engagement on promoting and developing AI.” Specifically, this document champions the development of industry standards to foster AI that is safe, reliable, and interoperable.

NIST GenAI

In addition to the draft documents, NIST announced the launch of its NIST GenAI program to evaluate AI technologies as they evolve. NIST GenAI’s pilot initiative, which opened to registration in May 2024, will seek to assess how human-created content differs from the products of generative AI. The pilot will enroll generator teams, whose aim will be to deploy AI systems capable of generating content that is indistinguishable from human content, and discriminator teams, whose systems will be tasked with detecting AI content produced by the generator teams.

Next Steps

The newly published documents are initial public drafts, which have yet to be finalized. Members of the public have until June 2, 2024, to submit comments on any of the draft documents.

Once finalized, these documents, like the AI Risk Management Framework, are likely to be nonmandatory guidance to assist organizations in building and deploying safe and reliable AI rather than affirmative requirements. Nevertheless, adherence to established frameworks like NIST’s AI Risk Management Framework may become an important benchmark and have a legal impact (e.g., forming the basis for an affirmative defense of an enforcement action under Colorado’s AI law). Organizations contracting for AI tools will be expected to have risk management programs designed around established frameworks such as the NIST frameworks — and lawmakers and courts will also consider compliance with standards like the AI Risk Management Framework in determining whether a company’s approach to AI risk is sufficient.

Author

Brian provides advice on global data privacy, data protection, cybersecurity, digital media, direct marketing information management, and other legal and regulatory issues. He is Chair of Baker McKenzie's Global Data Privacy and Security group.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Cynthia is an Intellectual Property Partner in Baker McKenzie's Palo Alto office. She advises clients across a wide range of industries including Technology, Media & Telecoms, Energy, Mining & Infrastructure, Healthcare & Life Sciences, and Industrials, Manufacturing & Transportation. Cynthia has deep experience in complex cross-border, IP, data-driven and digital transactions, creating bespoke agreements in novel technology fields.