AI: Real Money, Light Representations & Light Deal Certainties

Copyright 2024 Bloomberg Industry Group, Inc. (800-372-1033). Reproduced with permission.

Editor’s note: For additional guidance on practice-specific areas of risk associated with the use of generative and other forms of AI, see our AI Legal Issues Toolkit. For additional information on laws, regulations, guidance, and other legal developments related to AI, visit In Focus: Artificial Intelligence (AI), In Focus: Data Security, and Professional Perspective – Regulation of AI Foundation Models.

The wave of artificial intelligence (AI)-based commercial activity is no longer a prediction. AI is here to stay, and companies and organizations are investing in it. According to CB Insights, AI startups raised $42.5 billion across 2,500 equity rounds in 2023. We expect this spending trend to continue and the AI market to democratize away from the biggest industry players toward a larger number of competitors, including companies using corporate venture capital to leverage cash or secure more favorable access to AI technology.

We also expect some businesses to recharacterize themselves as “AI,” the way non-tech businesses have recharacterized themselves as tech, to capture above-average EBITDA multiples. All this is to say: the era of AI deals is here to stay.

Technology deals are not new. What is new is the speed of adoption and evolution of AI, which is creating use cases that add risk and requiring lawyers to think creatively about how to address that risk in deals. This is true even for the majority of deals, which relate to the acquisition of AI tools or use cases rather than to the large language models themselves. The most adept practitioners address AI at the margin, leveraging existing tools for the basic heavy lifting and then innovating to address AI-specific risks that require additional mitigation.

We focus specifically on these new innovations in this article.

Managing AI Risk Through Representations & Warranties

From a distance, AI can resemble traditional software. Some buyers may assume the risks associated with a target’s use of AI and software are the same. In some cases, this assumption may lead a buyer to underestimate the broad range of intellectual property, data privacy, product liability and other risks associated with AI use.

A buyer can use representations and warranties to obtain disclosure of known AI-related issues and allocate the risk of unknown issues between the parties. See Sample Clause – AI in Transactions (Representations & Warranties). This can be done, in part, through “customary” representations, such as those relating to ownership of IP, non-infringement of third parties’ IP and compliance with laws. However, where AI is significant to the target’s business, a buyer should seek AI-specific representations to thoroughly manage the risks associated with the target’s AI use. See Transactional Guide to AI.

Defining AI Concepts

The proliferation of large language models (LLMs) and generative AI applications has brought basic AI terminology into common vernacular. When addressing AI in legal documents, however, it is important to address key aspects of AI with precision—input, output, and the foundational model and application. The definitions below are used throughout this article and can be starting points for defining these concepts in purchase agreements (subject to transaction-specific customization).

  • AI Tools. Any deep learning, machine learning, or other AI technologies, including:

○ Algorithms, heuristics, models (including large language models), software or other systems that make use of or employ expert systems, natural language processing, computer vision, automated speech recognition, automated planning and scheduling, neural networks, statistical learning algorithms, reinforcement learning, or other machine-based systems designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infer from the input they receive how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; and

○ Implementations of the foregoing, whether in source code, object code, human readable form or other form, related IT systems, and all documentation related to the foregoing.

  • AI Inputs. Any data, information, content or materials used to develop, train, test, fine-tune, improve, deploy, implement, reinforce or otherwise support AI Tools.
  • AI Outputs. Any services, products, writings, graphics, recordings, electronic or other information, content, data or materials generated or derived from AI Tools or AI Inputs.

See Glossary – Artificial Intelligence (AI) Terms for Lawyers.

Intellectual Property

A target’s use of AI Tools raises questions related to IP ownership and infringement. While ownership and infringement questions are familiar, there are AI-specific complications that may not be squarely addressed through traditional representations. If a target uses an AI Tool developed by a third party, a buyer and/or investor will need to ensure that the applicable license agreement secures for the target sufficient rights in the AI Outputs for how such AI Outputs are used in the target’s business. A buyer/investor will want assurances that a target owns and may fully exploit the code for any proprietary AI Tools. It is also important to understand the underlying AI Inputs on which an AI Tool has been trained (as well as any foundational models on which the AI Tool relies) and the nature and ownership of the AI Outputs.

Even if the target owns the AI Outputs, the AI Outputs are not necessarily protectable under IP law. For example, the U.S. Copyright Office has confirmed that copyright protection extends only to material that is the product of human creativity. Whether IP protection for AI Outputs is a concern needs to be evaluated in light of how a target uses the AI Outputs in its business.

To address these issues, a buyer/investor can seek AI-specific representations, such as:

  • The target has not used AI Tools to develop any material IP in a manner that would materially affect the target’s ownership or rights therein.
  • The target owns or possesses all rights and licenses required under applicable law and contracts to use the AI Tools and AI Inputs and to generate the AI Outputs, in each case as has been used and is currently used and generated in its business.

The buyer or investor should also ensure that AI Tools (and related definitions) are incorporated into standard IP definitions, so that AI matters are covered by the standard IP representations. We expect significant negotiation with respect to non-infringement representations applicable to third-party large language models. Especially where a buyer is a more prominent litigation target, infringement risks relating to the underlying large language model may need to be a focus of business due diligence. See Checklist – Supplemental AI Due Diligence Checklist.

Data Privacy & Security

Since AI Tools require large amounts of data to learn, it is often costly, if not impracticable, to screen this data to ensure that it does not contain personal data. Data privacy laws, such as the General Data Protection Regulation (GDPR) in Europe and an increasing array of US state laws, give data subjects access rights in their personal data, which by definition is information that could be used to identify an individual. See Practical Guidance: GDPR, U.S. State Overview Builder – Privacy & Data Security, and Comparison Table – GDPR vs. State Comprehensive Consumer Privacy Laws.

These laws may require that individuals be notified of, and consent to, the collection or use of their data, and that they have the ability to opt out of such collection or use or to have their data removed. If personal data is included in AI Inputs, then data privacy laws will apply. Failing to identify and consider the impact of personal data on the development and use of an AI Tool may undermine a buyer’s rights or ability to use the AI Tool going forward.

A buyer should expand existing representations to ensure that AI Tools used by a target do not violate data privacy laws, for example:

  • Including AI Inputs in the data privacy-related defined terms, so that the general data privacy representations apply to AI Inputs.
  • Expanding representations related to compliance with data privacy laws to address personal data processing by an AI Tool.

In addition to the data privacy issues identified above, AI Tools present issues with respect to data security—both the security and integrity of the AI Tools themselves and the power of AI Tools to facilitate more sophisticated attacks by threat actors. It is essential to ensure that existing data security and information technology systems-related representations address these additional AI Tool-related data security risks. See Cyber Governance Toolkit.

Confidential Information

Concerns about restrictions on data use by AI Tools are not limited to personal data. Business information can be subject to restrictions on disclosure and use under agreements with third parties. These restrictions may be violated if an AI developer or user inputs such information into an AI Tool. In addition, a target could jeopardize the confidentiality of its own information by using it in connection with AI Tools, as this could result in unintended dissemination of such information to unauthorized individuals, loss of trade secret protection, or other similar risks. A buyer or investor should consider seeking representations related to uses of business information by AI Tools, such as:

  • The target has not included, in any prompts or inputs into any AI Tool, any trade secrets or other confidential or proprietary information of the target or any third party to which the target owes a duty of confidentiality.

Performance & Product Liability

A big question in AI is whether AI Tools function as designed. We expect this to be evaluated primarily through technical due diligence and testing, backstopped by representations relating to the function of AI Tools used by a target.

We also anticipate that product liability claims will increase as AI matures and that this will drive representations similar to performance-based ones. Companies that provide products to customers are generally susceptible to product liability claims; developers and licensors of AI Tools are no exception. Users of an AI Tool will expect it to work as described, and the developer or licensor could face claims if the AI Tool does not work in accordance with its product documentation. Regulators are focused on the operation of AI Tools in certain contexts, and exaggerating or over-promising the capabilities of AI Tools can lead to regulatory scrutiny and enforcement. There is also a risk that users of AI Tools may use them in ways that were not intended by the AI Tools’ developers and that lead to harmful or illegal results.

To backstop diligence of these aspects of the AI Tool, the buyer of an AI developer or licensor could consider seeking representations such as:

  • The AI Tools have operated in material conformance with their applicable documentation, and no customer has made any claims that any AI Tool has failed to operate in material conformance with such documentation.
  • There have been no errors, defects, failures, or interruptions in the AI Tool or AI Inputs that have had a material effect on the performance of the AI Tool’s intended purpose.
  • The target has technology or processes in place to verify and ensure the quality and accuracy of any predictions or output from its AI Tools.

Responsible AI & AI Governance

AI Tools reflect the AI Inputs on which such AI Tools rely. If AI Inputs are inaccurate, biased or otherwise flawed, this could impact the AI Outputs. One example of a challenge from a responsible AI perspective is that bias in AI Outputs can have harmful consequences, particularly for underrepresented or marginalized groups and individuals. This can lead to legal and ethical issues, which are magnified when companies use AI Tools to make decisions about human resources, health care, creditworthiness, or other sensitive topics.

Despite the potential for societal harm, there has historically been limited policing of bias in AI Tools. This is beginning to change. Legislators and regulators are increasingly seeking to regulate bias in the operation of AI Tools, including by restricting or prohibiting certain use cases (such as under the recently passed EU AI Act). For example, in July 2023, New York City began enforcing a law requiring that certain AI Tools used to make hiring decisions be subject to bias audits prior to use. In December 2023, the FTC filed a complaint and proposed settlement against Rite Aid, alleging that its AI-based facial recognition technology was used in a biased and unfair manner. The proposed settlement bans Rite Aid from using facial recognition technology for five years and requires Rite Aid to implement comprehensive safeguards to prevent AI bias and associated consumer harms.

Depending on the nature of the AI Tools related to a specific transaction, a buyer or investor may seek further assurances to bolster diligence on a target’s AI governance, including representations such as:

  • The target has taken commercially reasonable steps to avoid the introduction of inappropriate biases into the training of all AI Tools the target has trained.
  • The target maintains or adheres to industry standard policies, protocols, and procedures relating to the ethical and responsible use of deep learning, machine learning, and other AI technologies.
  • The target has not received any notice or communication from any governmental authority concerning the target’s collection or use of data.
  • The target has implemented and follows commercially reasonable AI governance with respect to the target’s development and use of AI Tools in a manner consistent with the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework or otherwise in a manner sufficient to address issues related to such AI Tools regarding validity and reliability, safety, security and resiliency, accountability and transparency, explainability and interpretability, privacy-enhancements, and fairness (with harmful bias managed).

AI’s Impact on Covenants

Parties to AI transactions need to be aware of the potential for heightened government scrutiny, including prior government authorization requirements, and take this into account in drafting the covenants in their purchase agreements. The US government has utilized an array of tools to address AI-focused considerations, and AI has repeatedly been an area of focus for the Biden administration.

In the foreign investment context, AI has been identified by the US government as a technology of heightened interest. President Joe Biden issued an Executive Order in September 2022 directing the Committee on Foreign Investment in the United States (CFIUS) to prioritize transactions involving AI, among other technologies.

The level of CFIUS risk will depend on the specifics of each transaction, including the risk profile (e.g., country) of the acquirer and the nature and scope of the target’s US operations. Depending on the facts at hand, the parties should be prepared to engage with CFIUS—either proactively (i.e., by notifying CFIUS of the proposed transaction on a mandatory or voluntary basis) or reactively (i.e., by responding to a CFIUS-initiated review).

On the antitrust front, bolstered by the Biden administration’s directive to use its existing authorities to ensure fair competition in the AI marketplace, the Federal Trade Commission (FTC) has been paying keen attention to AI transactions. The FTC has noted that, if a small number of players were to control essential AI Inputs, this could adversely impact competition in the generative AI space and allow certain players to have undue influence on economic activity. The FTC’s focus on AI was more concretely demonstrated in January 2024, when it ordered some of the leading tech industry players to provide information regarding their recent investments and partnerships relating to generative AI.

Similar to CFIUS, if a Hart-Scott-Rodino (HSR) filing is made in connection with an AI deal, the actual risk of government scrutiny will depend on the specific facts of the deal. However, as a general matter, parties should be aware that transactions in the AI space could be at heightened risk of receiving informational requests or even second requests.

Parties to AI deals should be aware of the risk of government scrutiny, even if merger control filings are not required. The FTC can scrutinize various types of transactions even if they do not meet HSR filing thresholds, and the transactions that were the subject of the FTC’s January 2024 order were not HSR reportable. In announcing that order, the FTC expressly noted that it intended to understand whether such transactions distort innovation and undermine fair competition.

In addition, the US Commerce Department has implemented a number of export control rules focusing on the infrastructure that supports AI. These include, among other things, controls on hardware and software that support or drive AI, which will need to be considered if a target utilizes any such items.

From a purchase agreement perspective, this means that the parties should consider the following:

  • Long Stop Date. Parties should set the long-stop date far enough from the signing date to provide adequate time to respond to regulators’ inquiries. We recommend at least 120 days from a U.S. perspective (potentially longer if ex-U.S. considerations apply). Separate from HSR and CFIUS considerations, where private third-party consents are required in connection with the transaction, the parties should allow extra time to obtain these consents, as third parties may take time to consider whether to oppose consolidation or new entrants into the AI field.
  • Hell or High Water. Buyers of AI businesses should seek to avoid agreeing to “hell or high water” provisions, under which the buyer is required to litigate or agree to government-imposed remedies in order to receive governmental clearances for the transaction. This is particularly true where the buyer already owns assets in the AI space, as this creates a heightened risk that the government may require the buyer to divest assets as a condition to the transaction, or otherwise seek to block it.
  • Early Analysis. Parties should analyze and agree early in the transaction process whether an HSR and/or CFIUS filing will be made, what other third-party consents will be sought, and the expected impact of these items on the transaction timeline.

Conclusion

There was a time, as some deal lawyers reading this article will remember, when “dot com” deals were new. Many of those early deals did not have privacy representations, and the domain name ownership representations that are now common were “new.” Since that time, privacy and Internet-related representations have become familiar parts of deal negotiations. With AI, we are still turning that corner toward familiarity. We hope this article is a helpful contribution to creating a common language for transactions in the AI space.

With assistance from Teisha C. Johnson and Sylwia A. Lis, Baker McKenzie

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.

Author

Bill assists global clients with transformational domestic and international mergers and acquisitions. He also leads Baker McKenzie’s North American corporate knowledge and training development program and has been an Illinois Super Lawyers Rising Star in Mergers and Acquisitions every year since 2012.

Author

Michelle Carr is a partner in the Corporate & Securities Practice Group of Baker McKenzie's Chicago office.

Author

Elizabeth is an associate in Baker McKenzie’s Chicago office, where she works with public and private companies at all stages of the business lifecycle in a variety of transactional matters. Elizabeth advises clients in various industries, including technology, consumer products, manufacturing and food and beverage.

Author

Teisha Johnson is a member of Baker McKenzie's antitrust practice in Washington, DC. She advises clients on a wide range of antitrust and e-discovery matters, and has considerable experience counseling clients in government investigations, proposed mergers and acquisitions, compliance, and litigation matters.

Author

Sylwia has extensive experience advising companies on US laws relating to exports and reexports of commercial goods and technology, defense trade controls and trade sanctions — including licensing, regulatory interpretations, compliance programs and enforcement matters. She also has advised clients on national security reviews of foreign investment administered by the Committee on Foreign Investment in the United States (CFIUS), including CFIUS-related due diligence, risk assessment, and representation before the CFIUS agencies.