On February 8, 2024, the Federal Communications Commission (FCC) unanimously adopted a declaratory ruling deeming telephone calls using AI-generated voices subject to Telephone Consumer Protection Act (TCPA) restrictions on calls containing an “artificial or prerecorded voice.” According to the ruling, § 227 of the TCPA, which prohibits the initiation of “any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party,” applies to calls using AI voice generation technologies.

Background

On November 16, 2023, the FCC launched a Notice of Inquiry (NOI) to explore “defining AI in this context, the current state of AI use in calling and texting, the impact of emerging AI technologies on consumer privacy rights under the TCPA, and, if appropriate, the Commission’s next steps to address these issues.” The NOI sought comment from the public on the risks (and benefits) posed by AI voice generation technology—especially the use of these capabilities to clone or imitate a person’s voice to deceive listeners, defraud consumers, instill bias, or perpetuate crimes—and how the FCC should wield its TCPA authority to rein in such risks.

In a timely demonstration of the potential dangers of technology used to imitate human voices, in the weeks before the FCC’s declaratory ruling, voters in New Hampshire received calls featuring an AI-generated imitation of President Joe Biden’s voice discouraging them from participating in that state’s primary election. The call was not from the President; it was from a Texas telecommunications company. The FCC is coordinating with the New Hampshire attorney general to investigate the robocalling campaign, and the episode underscores why the FCC felt it necessary to clarify how the TCPA applies to such conduct.

The Ruling

The declaratory ruling confirms the FCC’s interpretation that the TCPA’s restrictions on the use of an “artificial or prerecorded voice” apply to AI technologies. Although much of the NOI was devoted to defining AI technologies, the ruling itself does not include an explicit definition of the specific AI technologies covered by the TCPA. Nonetheless, the ruling states that the § 227 restrictions apply to voice cloning, which emulates real or artificially created human voices. The ruling also specifies that using AI technologies to communicate with consumers through prerecorded messages falls within the scope of § 227.

Further, the ruling cites the FCC’s Soundboard Ruling, which confirmed that the presence of a live agent on a call selecting prerecorded messages to be played does not negate the application of § 227. By analogy, the current ruling can be read to apply to calls featuring voice cloning and similar technologies even if a live operator determines the content of the call in real time.

Takeaways

The FCC joins a crowded field of federal, state, and international regulators jockeying to regulate AI technologies. So far, the EU has been at the front of the pack with the recent passage of the EU AI Act. One of the AI Act’s central features is the identification of certain prohibited use cases: AI applications that carry an unacceptable risk are categorically banned. The FCC declaratory ruling comes as close as any US regulator has come to date to proclaiming such a prohibited use.

But the ruling stops short of an outright ban, instead requiring callers to take certain steps when making calls that use AI to mimic human voices or to generate call content using a prerecorded voice:

  • Callers must obtain prior express consent absent an emergency purpose or exemption.
  • Calls made by such means must identify the entity making the call and clearly state the telephone number of the business during or after the call.
  • For calls made under an exemption to the prior express consent requirement, or calls that introduce an advertisement or constitute telemarketing, a key-press or voice-activated opt-out mechanism must be provided. If a person opts out, the number must automatically be added to the caller’s do-not-call list and the call must be immediately terminated.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Cynthia is an Intellectual Property Partner in Baker McKenzie's Palo Alto office. She advises clients across a wide range of industries including Technology, Media & Telecoms, Energy, Mining & Infrastructure, Healthcare & Life Sciences, and Industrials, Manufacturing & Transportation. Cynthia has deep experience in complex cross-border, IP, data-driven and digital transactions, creating bespoke agreements in novel technology fields.

Author

Brian provides advice on global data privacy, data protection, cybersecurity, digital media, direct marketing, information management, and other legal and regulatory issues. He is Chair of Baker McKenzie's Global Data Privacy and Security group.

Author

Alex advises clients on issues involving data privacy, digital transformation, IP, and cutting-edge technologies such as artificial intelligence. He represents clients in drafting agreements for data, IP, and technology transactions.