This article was first published in Corporate Counsel Magazine as “A New Era of Influence: A Swift Take on Branding, Computer-Generated Influencers and Deepfakes.”

In no uncertain terms, 2023 was the year of pop icon and IP rockstar Taylor Swift. A household name for her music and sold-out concerts, Swift is known for leveraging intellectual property protections to grow and secure her brand. In 2023 alone, she mounted the highest-grossing tour to date, with revenue of over $1 billion; released a concert film that grossed over $250 million; and was named Time Magazine’s “2023 Person of the Year.” Whether it is registering “Swifties” with the U.S. Patent and Trademark Office or rolling out “Taylor’s Version” for her entire discography, Taylor Swift’s use of the law to protect her curated brand has played a key role in her sustained success.

A-list celebrities like Taylor Swift are not the only ones leveraging their influence to promote and sell products and services. Streamers, vloggers, and other creatives are using their platforms to shape today’s consumer trends. While many traditional methods of advertising still have staying power, there is no denying the new wave of influencers (including virtual influencers) in digital marketing. In 2023, brands spent an estimated $21 billion on creator marketing, and the global influencer industry is expected to reach $69.92 billion by 2029. There is no doubt that influencer marketing has become an integral part of many companies’ business plans.

However, a steep rise in the use of GenAI and computer-generated influencers brings with it new marketing risks and considerations for celebrities, influencers, and businesses alike.

Computer-Generated Influence

For many companies, influencers are a welcome alternative for endorsing products and services, given their ability to build credibility and trust, broaden a company’s appeal, and reach its target audience. And with recent advancements in artificial intelligence (AI) and machine learning, celebrities, influencers, and businesses have unique opportunities to increase efficiency. New tools allow users to rapidly generate high-quality text, graphics, images, and videos that are realistic, personalized, and engaging, all at a fraction of the cost of traditional digital illustration and 3D modeling.

Take, for example, the growing number of influencers who are using GenAI to create virtual avatars for content production. Computer-generated influencer avatars, or “virtual influencers,” create a humanized experience for consumers, making digital marketing content more relatable and entertaining. Virtual influencers are particularly attractive to businesses because they are cost-effective, offer tighter and more consistent message control, can serve as a curated “personality” that reflects the brand, and can reach a wider audience with instant access to analytics.

Companies can partner with virtual influencers (such as Lil Miquela, a robot listed as one of Time’s “25 Most Influential People on the Internet” in 2018, or Qai Qai, the popular baby influencer with over 339,000 followers on social media) or create their own computer-generated brand ambassadors to advertise products and services.

Tips for Digital Marketing With Virtual Influencers

  • Know your audience. Using GenAI, and virtual influencers in particular, to create advertising content has benefited some brands, growing their followings, building their brands, and making their products and services more appealing to their target audiences. Conversely, it has also produced less positive results for customer loyalty, especially where the content runs counter to consumers’ social responsibility expectations.

Recently, a Formula E team ended its use of an AI-generated influencer as an ambassador after immediate backlash from motorsport fans. Comments flooded the internet criticizing the team’s lack of awareness and sensitivity toward issues of diversity and sustainability. Fans stressed the importance of hiring real-life women as ambassadors rather than computer-generated influencers, since diversity in the sport has historically been minimal. This example highlights the importance of considering how your target audience may respond to your content and ensuring that content is consistent with the brand and image you want to portray.

  • Obtain consent before using the likeness of others. Companies should determine how closely they intend to replicate the features of real people and refrain from using the likeness of others, including high-profile individuals, influencers, and even characters, without a license. Failure to obtain consent may invite claims for violation of a person’s right of publicity or for infringement.
  • Other considerations. Other issues to explore when deciding whether to use GenAI in digital marketing (including the use of a virtual influencer) include: (1) sustainability (given the considerable energy machine learning consumes and its environmental impact), (2) bias (these systems may amplify human biases), and (3) diversity (e.g., the potential to help or hurt diversity and inclusion efforts).

Deepfakes

Advances in GenAI have also fanned the flames of content known as deepfakes, which are increasingly used to harm individuals and businesses alike. Deepfakes are digitally modified or fabricated images, audio, or video, created with the same techniques as other computer-generated content, that result in hyper-realistic media.

Not even Taylor Swift or her “Swifties” can escape the ramifications of deepfakes. In January 2024, various social media platforms were flooded with an advertisement that appeared to show Swift endorsing a giveaway of Le Creuset products. Le Creuset promptly stated that it was not working with Swift on any consumer giveaway, but many consumers fell victim to the fake endorsement.

The proliferation of deepfakes has caught the attention of a number of regulators. The Federal Trade Commission (FTC) recently finalized its Rule on Impersonation of Government and Businesses, which aims to address the rise in scammers impersonating businesses or government agencies and allows the FTC to seek monetary relief for these deceptive practices.

States are also moving to protect against AI-related abuses, including the misuse of famous voices like Taylor Swift’s. On March 21, 2024, Tennessee passed the Ensuring Likeness Voice and Image Security Act of 2024 (ELVIS Act), which expands the state’s right of publicity law, previously limited to explicit protections for one’s “name, photograph, or likeness,” to cover a person’s voice, and adds AI-specific protections against the unauthorized use of a person’s voice.

Even as state and federal regulators advance protections against deepfakes, however, companies should consider implementing a comprehensive AI governance and use program to ensure proper risk mitigation strategies and protocols are in place.

Tips for Internal Policies and Procedures to Protect Against Deepfakes

  • Update terms of use. Companies should update the external terms of use for their websites and other online platforms where content that could be used to create a deepfake may be posted. The terms should expressly prohibit scraping, as well as copying or modifying any images or videos posted on the site.
  • Watermark your content. Any content posted on publicly accessible sites, including images or videos featuring high-level executives, should incorporate a company watermark that is difficult to remove. This will help the public identify authentic content and distinguish it from deepfakes, and will also help your business detect manipulation of its content.
  • Educate and train employees. Implement formal training sessions to explain what deepfakes are and how to identify them. Use examples of legitimate and fake content, and provide a step-by-step plan for spotting common signs of deepfakes (e.g., pixelation, blurring, cropped effects, inconsistencies and discoloration, unnatural movement, gaps in speech). This will raise general awareness and make employees more wary of, and less susceptible to, potential phishing and other cybersecurity threats.
  • Implement appropriate security protocols. Revise internal security policies and implement identity verification protocols, including login credentials. Use deepfake-detection technology to help identify deepfakes, particularly in phishing email schemes. Develop a monitoring system to identify deepfakes in circulation (e.g., on social media), and incorporate the removal of such deepfakes into your company’s takedown policy.
  • Prepare a response strategy. Companies should have a specific incident response plan, as part of their overall security and incident response plan, in the event of a deepfake attack. It is crucial to eliminate a deepfake as soon as possible: the longer it remains available, the more likely the public will be deceived. Work with your legal teams to prepare pre-approved communication templates in advance to address the media, employees, or vendors. Companies should also update their internal policies (including cybersecurity policies) to guide employees on how to address a potential deepfake attack.

Key Takeaways

So what does all of this mean for companies?

All businesses should carefully evaluate how they advertise their products and whether the use of a celebrity, social media influencer, or virtual influencer aligns with the brand and its messaging. Given the increasing issues surrounding deepfakes, companies should also consider implementing a comprehensive AI governance and use program that includes risk mitigation strategies and protocols.

Additionally, companies should consider developing a content and social media manual for all of their external content creators. The manual should provide guidance on the use of GenAI and its implications for privacy and intellectual property. The guidelines should be explicit, and the resulting agreements with social media influencers should include robust provisions governing the use of GenAI for the brand’s content (e.g., disclaimer, liability, or indemnity language).

To the extent your company decides to involve a virtual influencer in its marketing campaigns, it should assess whether there is a real benefit to doing so, noting the differences between traditional celebrities, social media influencers, and virtual influencers. GenAI is being used in a number of ways, but with forethought, companies can minimize and mitigate the risks that follow.

Cynthia Cole is an intellectual property partner in Baker McKenzie’s Palo Alto office, as well as a member of the IP Tech Global Steering Committee. An intellectual property transactions attorney, Cynthia also has expertise in AI, digital transformation, data privacy and cybersecurity strategy.

Alysha Preston is a senior associate in the firm’s intellectual property practice group in New York. Alysha has expertise in intellectual property and technology transactions, including license and collaboration agreements, joint ventures, spin-outs, commercial contracts, and mergers and acquisitions.

Inez Asante is an associate in the intellectual property technology practice group in New York, where she works primarily on IP transactions.
