In brief The US Artificial Intelligence Safety Institute (“AISI”), housed within the National Institute of Standards and Technology (“NIST”), announced on 20 November 2024 the release of its first synthetic content guidance report, NIST AI 100-4 Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency (“NIST AI 100-4”). “Synthetic content” is defined in President Biden’s Executive Order on Safe, Secure, and Trustworthy AI (“EO 14110”) as “information, such as…
In brief The Canadian Competition Bureau (the “Bureau”) continues to focus on the intersection between artificial intelligence (“AI”) and competition law. Earlier this fall, the Bureau hosted the Competition Summit 2024: Market Dynamics in the AI Era, which provided insight into how AI will impact Canadian competition law policy and enforcement, and has released a report summarizing key takeaways. Through investments in its own capabilities and collaboration with domestic and international partners, the Bureau also remains…
Deepfakes, especially those generated by AI, are now a daily part of our lives. Some are harmless entertainment, while others are deeply harmful to individuals and society. And as AI technology improves, deepfakes are becoming increasingly difficult to spot. Individual states are racing to stop harmful deepfakes, but they are struggling to keep up with the technology. In this session, Baker McKenzie’s Cynthia Cole, global chair of our commercial, technology, and transactions team, and…
In brief On Thursday, November 14, 2024, the U.S. Department of Homeland Security (“DHS”) announced its groundbreaking “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” (“Framework”). The Framework is a guide for deploying AI safely and securely in all sixteen sectors of U.S. critical infrastructure, including communications, critical manufacturing, energy, financial services, healthcare, and information technology. It emphasizes the importance of risk-based mitigations to reduce potential harms to critical infrastructure and highlights the…
On Friday, November 8, 2024, the California Privacy Protection Agency board voted 4-1 to commence the formal rulemaking process for the draft regulations on Automated Decisionmaking Technology (ADMT), Risk Assessments, Cybersecurity Audits, and Insurance Companies. The formal rulemaking process will begin with a 45-day public comment period. During this time, CPPA staff will gather and analyze public comments, which will inform potential amendments and revisions to the regulations. The period will likely be extended to…
The Federal Trade Commission (FTC) recently announced Operation AI Comply, an enforcement sweep targeting a diverse swathe of organizations that offer deceptive artificial intelligence (AI) products or use AI products in ways that harm consumers. The campaign demonstrates that the FTC is following through on past promises to crack down on deceptive AI and offers insight into the types of conduct that may attract the agency’s scrutiny. The announcement observes: “The cases included in this…
By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze data on applicants, employee productivity, performance, engagement, and risks to company resources. However, with the recent explosion of attention on AI and the avalanche of new AI technologies, the use of AI is garnering more attention and scrutiny from regulators and, in some cases, employees. At the same time, organizations are…
In brief In September 2024, Texas’ Attorney General announced a “first-of-its-kind” settlement with a healthcare generative artificial intelligence (“Gen AI”) company over what it said were “false, misleading, or deceptive” Gen AI products that aid physicians and medical staff in drafting clinical notes and charts. Per the Attorney General, the company’s advertised hallucination rate was “very likely inaccurate,” which “may have deceived hospitals about the accuracy and safety of the Company’s products.” The settlement provides…
On September 29, 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, which would have enacted the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the “Act”) to create a comprehensive regulatory framework for the development of artificial intelligence models. The veto embodies the dilemma that has emerged around the regulation of AI applications: how can laws prevent harms in the use and development of AI, while promoting innovation and harnessing the power…
In brief On September 17, 2024, California Governor Newsom signed a pair of bills into law that seek to address the use of AI-generated digital replicas of performers in the state’s world-leading entertainment industry. These new laws will enhance protections for performers’ rights in digital reproductions of their likenesses and may require organizations that create, use, or contract for digital replicas to implement new measures to ensure compliance with the new legislation. Discussion The first…