On September 29, 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, which would have enacted the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the “Act”) to create a comprehensive regulatory framework for the development of artificial intelligence models. The veto embodies the dilemma that has emerged around the regulation of AI applications: how can laws prevent harms in the use and development of AI, while promoting innovation and harnessing the power of new technologies to effect positive change?
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act at a Glance
Although the Act sought to follow the EU AI Act in establishing a comprehensive regulatory framework, it eschewed the AI Act’s risk-based approach to AI regulation. Instead, the Act would have applied to “covered models,” which are defined as AI models that exceed thresholds in the computing power and costs associated with their training.
The Act would have imposed certain obligations and restrictions on covered models, including:
- The implementation of administrative, technical, and physical cybersecurity measures to protect against unauthorized access, misuse, or post-training modifications
- The implementation of full-shutdown capabilities
- A written safety and security protocol, along with the designation of senior personnel to ensure implementation of the protocol as written, the retention of the protocol for as long as the model is made available (plus five years), and the publication and filing of the protocol with the California attorney general
- Assessment of whether a covered model is capable of causing or enabling material harm
- Not making a covered model available for commercial or public use if there is a risk that the model will create or enable a critical harm
- Undertaking and retaining annual third-party audits of covered models
- The submission of a statement of compliance to California’s attorney general
- Reporting safety incidents to the attorney general
The Act would have provided for attorney general enforcement, with monetary penalties and injunctive relief available for violations.
Although Senate Bill 1047 will not become law, there are federal legislative efforts underway that would regulate AI developers and cloud providers. For example, the U.S. Department of Commerce’s Bureau of Industry and Security recently released a Notice of Proposed Rulemaking that would impose cyber reporting obligations on frontier AI developers and compute providers.
In Context
The veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is the culmination of a busy legislative period for AI regulation, with Governor Newsom signing eighteen new laws relating to AI in the preceding 30 days. The new California legislation runs the gamut of subject areas, from laws requiring the disclosure of AI tools used in political advertisements to the establishment of a commission to consider the inclusion of AI literacy in California schools.
While this frenzy of legislative activity suggests a willingness to regulate AI, the refusal to enact a comprehensive regulatory framework in the style of the European Union’s AI Act or Colorado’s recent AI law is significant. The Act attracted opposition from California Representatives Nancy Pelosi and Zoe Lofgren, AI thought leaders like Professor Fei-Fei Li, as well as California organizations leading AI development. The criticisms, many of which were reflected in Governor Newsom’s veto message, noted the impact of the law on innovation, its failure to adopt a risk-based approach like the EU AI Act, and potential harms to the development and availability of open source AI models.
Despite the veto, Governor Newsom reiterated his commitment “to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation.” Organizations that develop and use AI should continue to monitor legislative developments as statehouses consider both comprehensive and use-specific proposals to regulate AI.