Business Law Today

January 2024

Diligencing AI-Enabled M&A Targets: Seven Things to Understand

James Jian Hu, Karl Gao, and Yixin Gong

Summary

  • Legal due diligence of AI-enabled targets should be tailored to the particular AI tools the target uses. Buyers should examine the type, function, provenance, and use of those tools.
  • Buyers should understand how the applicable AI tool is trained and verify that the target has necessary rights to use any material AI-generated outputs.
  • If the target uses proprietary AI technologies, diligence should confirm that appropriate contractual and technological safeguards have been instituted to protect those technologies.
  • Risk assessment should account for potential liabilities arising from use of the AI tools and their outputs, including review of risk allocation provisions in applicable vendor agreements and customer terms, along with the target’s compliance readiness for evolving AI regulations.

As artificial intelligence (“AI”) becomes more prevalent in business processes and service delivery across industries, it is increasingly important for M&A buyers to familiarize themselves with the legal nuances associated with the use of AI technologies. In this article, we explore seven key areas of inquiry for an M&A buyer when conducting legal due diligence on a target company that uses AI in its operations.

1. Type of AI used and how the target is using it

AI can be used for a wide variety of functions and applications. At the outset, it is important to understand what types of AI tools, systems, models, and technologies the target company is using, the provenance of such technologies (e.g., are they proprietary or licensed from a third party?), and how they are being used. Are they used internally only or in the delivery of products or services? Will they be business-to-business and/or consumer-facing? The answers to these questions will inform due diligence strategy and assist M&A buyers in assessing the target company’s risk profile.

Further, AI is not one type of technology. Generative AI—AI that creates new synthetic content or data, like text, images, audio, video, and source code, after being trained on large datasets and often using large language models—has caught the attention of the world, but it is only one of many kinds of AI. Deal team members should identify the types of AI being used by the target company and develop a tailored due diligence plan to understand the legal implications of the target’s AI-enabled operations and offerings.

2. How the AI is trained and rights to input data

Most AI technologies (including generative AI) require access to large datasets in order to train the AI’s “foundation models.” If the target company uses AI technology provided by a third-party vendor, the M&A buyer will need to diligence the vendor contract or applicable terms of use to analyze both the commercial arrangement between the target company and the vendor and how the vendor’s AI technologies were trained (e.g., on what datasets and under what rights). Where the target company provides protected, confidential, proprietary, or otherwise commercially sensitive information or data (e.g., personal data) to the third party, whether to further train or fine-tune the AI technology or via prompts (i.e., queries), buyers should also assess how the target company has addressed the associated risks. For example, they should consider how the vendor contract permits the vendor to use such information and data, how the vendor is required to secure and protect the information and data (including retention and deletion obligations), and what guarantees (if any) the target company is making with respect to such inputs. Analyzing the nature and source of the training data (including any associated rights and other disclosures or consents) may also be warranted when the target company is training its own proprietary AI models. As further discussed in Section 6, legal obligations under data protection laws and regulations still apply, despite the evolving AI regulatory landscape (see Section 7).

3. Rights to the AI-generated output

Where a target uses generative AI to create outputs, the M&A buyer should diligence the materiality of those outputs to the target’s business and whether they can be protected against third-party use. For example, consider whether the AI tool and outputs will be used internally only, whether the target company (or possibly even the M&A buyer) may wish to incorporate the AI technology or AI-generated outputs into its own products and services, and whether the value of the target company depends on having exclusive rights to (or the ability to exclude others from using) the AI-generated output. Moreover, where the target company uses third-party AI, vendor contracts should also be analyzed to confirm whether the target company has the necessary rights to use the AI technology and its outputs, both pre-acquisition in its current business (e.g., commercially) and post-acquisition as the M&A buyer intends to use them.

Copyright and patent laws in the majority of jurisdictions (including the US, UK, Australia, and Europe) do not currently protect works or inventions created solely by AI. Accordingly, if AI-generated outputs comprise all or part of any material assets or operations of the target company, it will be important to determine the extent of human involvement in their creation, what intellectual property rights the target company may have in them, and what other measures the target company has taken to protect them (e.g., contractual protections). Buyers should also review the contractual terms applicable to such outputs (whether under the vendor terms or under commitments the target company itself may be making with respect to the AI-generated outputs).

4. Risk allocation

If the target company uses AI-enabled tools or technologies—whether proprietary or from a third party—on a commercial basis, the M&A buyer should carefully assess the potential risk associated with use of the AI or its outputs, including review of any applicable vendor contract to understand how such risk is allocated.

For example, if the AI model was trained on copyrighted works, it could reproduce copyrighted material in its output. Many vendors have started providing certain contractual protections and indemnifications in this regard. As another example, if the target company relies on a third-party AI tool to deliver products or services to its customers and the AI tool malfunctions (e.g., hallucinates in a chatbot context), the target company may be in breach of commitments it has made or be liable for any harm or damage resulting from its customers’ use of erroneous outputs. From an M&A buyer’s perspective, it is therefore important to understand the scope of the target’s (and, if applicable, the vendor’s) warranties, limitations of liability, and indemnification obligations, as well as the creditworthiness of any indemnifying party. In addition, an M&A buyer should review any insurance policies the target carries that could cover such third-party claims.

5. Protection of proprietary AI technology

If the target company developed the AI tool and it confers a competitive advantage or is otherwise material to the target company’s business, the M&A buyer should seek to understand how the company protects the AI technology from use by others. This inquiry will often resemble diligence of the target company’s other proprietary intellectual property, including review of policies and procedures, employment and contractor agreements, and the location of development.

Under intellectual property laws in the United States, AI technologies may be protectable through patent, copyright, and trade secret laws. The US Patent and Trademark Office recognizes AI as a class in its patent classification system, but given the nature of AI inventions, there are challenges to satisfying the subject matter eligibility and enablement requirements for patent protection. Copyright protection may be available, but only for certain aspects of an AI system, such as the original expression of its source code and its visual elements; functional aspects (like algorithms) are not copyrightable. Often, then, AI models are best protected as trade secrets. Acquirers should therefore confirm that the target company has taken reasonable measures (including legal, physical, and technological measures) to protect and maintain the secrecy of its AI models, including maintaining reasonable information security policies and procedures and securing appropriate nondisclosure agreements from personnel and third parties with access to the information. Taking reasonable measures to protect secrecy is not just a legal requirement for maintaining trade secret status under US law but also an operational safeguard to ensure the information does not (directly or indirectly) fall into the wrong hands.

6. Cybersecurity and data privacy considerations

If personal information or other regulated information is used by the target in connection with its AI technology, diligence should include a review of at least the following: the target’s data privacy policies and cybersecurity practices; whether the target’s use of AI technology is consistent with applicable privacy policies, laws, and regulations; where such data or information is stored; the security measures in place to safeguard against breach; and the insurance coverage applicable to breaches. In the US, many states have comprehensive privacy laws, including the California Consumer Privacy Act (as amended by the California Privacy Rights Act), the Virginia Consumer Data Protection Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, and the Utah Consumer Privacy Act. In addition to comprehensive privacy laws, there are sectoral laws relevant to privacy and AI, such as Illinois’s Biometric Information Privacy Act, which governs the use of biometrics and carries significant penalties. The target should be able to describe the nature of the relevant data and how it was obtained, as well as any applicable contracts, user consents, or disclosures governing such data (including compliance with any use restrictions that apply to the data).

If the AI technology has been provided by a third party, not only will the target’s practices be relevant, but vendor contracts or applicable terms of use should also be reviewed to ensure there are appropriate vendor obligations addressing data privacy (see Section 2) and cybersecurity.

If the target company operates across different jurisdictions, inquiries should also be made about the measures the target takes to comply with cross-border data transfer requirements. In particular, if the target uses training data sourced from multiple jurisdictions, the M&A buyer should confirm that the associated cross-border data transfers complied with applicable requirements.

7. Compliance support and the changing regulatory landscape

As described above, acquisition of AI-enabled M&A targets involves nuanced legal considerations. In addition, the regulatory landscape with respect to AI is rapidly evolving; for example, the European Union reached political agreement on the EU AI Act on December 8, 2023, and US President Joseph Biden issued an executive order on “safe, secure, and trustworthy” AI use on October 30, 2023. The frameworks, regulations, and legislation being introduced or discussed around the world vary in their approaches: they define AI differently, target somewhat different issues, and take differing approaches to enforcement and liability. M&A buyers should consider what systems and processes the target company has in place to oversee its use of AI and to address the challenges posed by such technologies (e.g., systems to identify and minimize bias and to ensure safety, transparency, and human oversight). They should also consider what representations the target company is making about its AI usage.

Some major frontier AI companies have devoted tremendous resources and built strong teams to tackle the challenges of regulatory compliance and to address ethical issues arising from the use and development of AI. Accordingly, it is important to examine the target company’s organizational support and systems for complying with, and adapting to, the evolving regulatory landscape, including both existing and future compliance obligations.

Hope Anderson, Burr Eckstut, Arlene Hahn, and Erin Hanson, partners of White & Case LLP, also contributed to this article. Any views expressed in this publication are strictly those of the authors and contributors and should not be attributed in any way to White & Case LLP or NIO.
