If passed, SB-1047, the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would impose product safety, documentation, and reporting obligations on developers of AI systems. Currently awaiting passage in the state Assembly, the bill would be a landmark regulation for the burgeoning AI industry.
The law as currently written would mainly target larger AI projects developed by companies with extensive resources, rather than smaller startups. However, operators of data centers would also be required to assess whether their customers intend to use their services to develop covered AI models, keep records of those customers, and shut down their services if required under the act.
The bill would create a Board of Frontier Models and a “Frontier Model Division” within the California Government Operations Agency to review the certifications of covered developers, who would be required to perform product safety assessments and to retain a third-party auditor to reevaluate their compliance with the act annually.
What AI systems are covered?
The models covered under the law would vary over time, subject to rules to be developed by the Frontier Model Division. Initially, the act would only apply to AI models that require a minimum investment of financial or computational resources in their training. Specifically, “covered models” would mean either of the following (an illustrative sketch of these thresholds follows the list):
- An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.
- An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations.
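To make the two prongs concrete, the sketch below applies the thresholds quoted above to hypothetical training-run figures. The function name, the constants, and the simplified treatment of training cost are illustrative assumptions, not terms of the bill; under the bill, the cost figure would be the developer’s reasonable assessment of average cloud-compute market prices at the start of training.

```python
# Illustrative sketch only: applies the two "covered model" prongs quoted above
# to hypothetical figures. Names and the simplified cost input are assumptions.

COVERED_MODEL_FLOP_THRESHOLD = 1e26          # training compute threshold (operations)
COVERED_MODEL_COST_THRESHOLD = 100_000_000   # USD, assessed at start of training
FINE_TUNE_FLOP_THRESHOLD = 3e25              # fine-tuning compute threshold (operations)


def is_covered_model(training_flops: float,
                     training_cost_usd: float,
                     fine_tuned_from_covered_model: bool = False,
                     fine_tuning_flops: float = 0.0) -> bool:
    """Return True if a model would meet the bill's initial thresholds."""
    # Prong 1: trained using more than 10^26 operations at a cost exceeding $100M.
    if (training_flops > COVERED_MODEL_FLOP_THRESHOLD
            and training_cost_usd > COVERED_MODEL_COST_THRESHOLD):
        return True
    # Prong 2: fine-tuned from a covered model using at least 3 x 10^25 operations.
    if fine_tuned_from_covered_model and fine_tuning_flops >= FINE_TUNE_FLOP_THRESHOLD:
        return True
    return False


# Example: a hypothetical 2e26-operation training run costing $150M would be covered.
print(is_covered_model(training_flops=2e26, training_cost_usd=150_000_000))  # True
```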
Who must comply?
Developers of covered AI models and persons operating “computing clusters” (data centers with a minimum level of computing capacity defined under the act) would be required to comply.
“Developer” means a person that performs the initial training of a covered model either by training a model using a sufficient quantity of computing power, or by fine-tuning an existing covered model.
How to comply?
Developers’ product safety obligations. Before they begin the initial training of a covered model, developers would be required to:
- Implement particular technical and organizational measures, as well as measures focused on preventing AI-enabled harms;
- Implement the capability to promptly enact a full shutdown of training activities, use of the model, and use of all derivatives of the model;
- Implement and regularly review a separate, written security protocol satisfying prescriptive elements, which must be provided to the Frontier Model Division; and
- Designate senior personnel responsible for ensuring compliance.
Before developers use a covered model or covered model derivative, they would be required to:
- Perform product safety assessments;
- Implement safeguards; and
- Ensure the model’s actions can be attributed to them.
Developers would also have ongoing duties to:
- Not use or make available a model if it poses an “unreasonable risk” of “critical harms,” defined as (1) the creation or use of chemical, biological, radiological, or nuclear weapons, (2) cyberattacks on critical infrastructure resulting in mass casualties or at least $500,000,000 in damages, or (3) otherwise engaging in conduct with limited human oversight that results in $500,000,000 in damages;
- Annually reevaluate compliance with the act, including by retaining a third-party auditor and having an audit report prepared;
- Submit a certification of compliance to the Frontier Model Division no more than 30 days after making a model available for the first time, and annually thereafter;
- Report each AI safety incident affecting the covered model or any covered model derivative to the Frontier Model Division within 72 hours; and
- Provide a publicly available price schedule for the purchase of or access to the model.
Whistleblower protections and reporting. Developers and their contractors and subcontractors would be prohibited from preventing employees from reporting violations and from retaliating against whistleblowers. They would also be required to provide clear notice to all employees working on covered models of their rights and responsibilities under the act, and to maintain an internal channel for anonymous reporting of any violation of the act or other law.
Operators’ KYC and shutdown obligations. Persons that operate a computing cluster would be required to:
- Implement written policies and procedures to perform due diligence on prospective customers, keep records of these due diligence actions for seven years, and provide such records to the Frontier Model Division or the Attorney General upon request; and
- Implement the capability to promptly enact a full shutdown of any resources used to train or operate models under their customers’ control.
Sanctions and Remedies
The California Attorney General and the California Labor Commissioner may bring civil actions against, and are entitled to recover civil penalties from, developers and persons operating a computing cluster for failures to comply with the act. Injunctive and declaratory relief, monetary damages, punitive damages, and attorneys’ fees are also available.
Baker McKenzie’s recognized leaders in AI are supporting multinational companies with strategic guidance for responsible and compliant AI development and deployment. Our industry experts with backgrounds in data privacy, intellectual property, cybersecurity, trade compliance, and employment can meet you at any stage of your AI journey to unpack the latest trends in legislative and regulatory proposals and the corresponding legal risks and considerations for your organization. Please contact a member of our team for more information.