©2024. Published in Landslide, Vol. 16, No. 4, June/July 2024, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.
July 10, 2024 Feature
Getting Ahead of New Risks in Commercial Transactions in the Age of AI
Ariel Seidner
The pervasiveness of artificial intelligence (AI) is transforming the commercial transactions landscape. Providers across industries are looking to use third-party AI tools, or to use customer data to train AI models, in connection with providing services or implementing customer-proposed use cases to create efficiencies and cost savings. The intellectual property (IP) stakes are heightened, and parties on either side of a transaction will need to leverage agreements carefully to maintain IP rights in their own data, secure IP rights in resulting products, and protect themselves against claims of infringement.
While AI has been prevalent in various forms for decades, it was catapulted into current popular culture with the recent rapid rise of generative AI (GenAI). GenAI models are capable of processing and learning from enormous amounts of data (training data), ultimately generating new content (output) based on, and informed by, the training data and data input by users (which in some instances may then be used as training data). Recent innovations using GenAI generate output including text, images, code, and other content in response to a user’s input of natural language prompts. These user-friendly models render many GenAI tools readily available and accessible—attributes that have only bolstered the appeal and ubiquity of these tools.
The increased prevalence of GenAI and its capabilities has produced a critical marketplace shift: demand for AI tools offered in different capacities and directed at different use case objectives has risen rapidly. That rise in demand has been met with a rise in supply, and the various offerings come with correspondingly different license terms. For an obvious example of the impact demand has had on AI provider offerings, one need look no further than the sheer frequency with which license terms are revised. This is an industry-wide trend that accommodates, rather than alienates, users and the concerns they have seen play out in real time as commercial use of AI has sparked regulatory investigation and litigation across industries (in no small part involving IP). The frequent revisions also reflect an effort to accommodate the operational dilemmas companies face as internet-based offerings evolve with respect to both the data that is accessible and the tools that are available.
This article will address concerns associated with increased business use of AI and identify some tips and best practices that can help inform a strong company strategy to protect data and IP in today’s GenAI world.
Have a Strategy and Policy
The impact of technology revolutions on business operations is not new. Decades ago, as information moved from paper to electronic storage, companies became acutely aware of the major risk this shift presented to their data and proprietary information, especially with the proliferation of smartphones. One solution was to issue company-owned devices, but soon the term “BYOD” (or “bring your own device”) became a commonplace commercial concept and is frequently a favored alternative, as a properly implemented BYOD policy allows company personnel to use their personal devices for business purposes. Either way, the takeaway is that advancements in technology presented a concern, and the business world adapted. Today, corporate information technology policies standardly include a BYOD policy alongside other policies designed to protect confidential business and proprietary information, such as policies regarding system use, network access, and electronic communications.
AI (including GenAI) follows suit. Having an AI policy allows a company to streamline the AI governance process for purposes of protecting its data and IP, and in turn can be leveraged as a tool to manage costs with service providers. A strong AI policy answers important questions and creates a framework within which both company personnel and service providers must operate to ensure that the company’s data and IP are protected.
The policy should give clear guidance, including:
- Who the policy applies to
- Who to contact with questions about the use of AI
- How and when AI tools or services may be used within the company
- Which elements of the company’s data and IP can or cannot be disclosed during the use of AI tools (e.g., as prompt input)
- Actions that must be taken before making use of any AI output
- How input may (or may not) be used outside of the company (e.g., as training data for the AI tool being utilized)
A sampling of terms that, if properly implemented, can help form a strong AI policy is provided for informational purposes only, as follows:
Who This Policy Applies To: This Policy applies to all of the Company’s employees, staff, personnel, contractors, and service providers. This Policy supplements, and does not modify or replace, all other applicable policies, standards, and guidelines.
Use of Generative AI: No Company data or information of any kind can be input into a Generative AI tool without express written permission and instruction from the Company. Use of Generative AI tools that is authorized by the Company for specific business purposes remains subject at all times to the Company AI Policy and applicable Company instructions and usage guidelines.
Output from authorized Generative AI tools should always be carefully reviewed for accuracy, completeness, and appropriateness, and must only be used in accordance with the Company AI Policy and applicable Company instructions and usage guidelines.
Vendor Assessments: It is important to remain aware that all service providers, regardless of industry or function, may use AI tools to provide their respective services. Always follow the vendor assessment guidelines outlined in this Policy to ensure that the service provider’s use of AI tools is known and documented, and that where such use is authorized, the service provider will only be furnished with the specifically authorized data and access privileges.
A strong AI policy not only educates the reader about the use of AI but also raises awareness about the importance of complying with the policy. If crafted and implemented carefully, an AI policy can be a critical tool in a company’s IP-protection toolkit.
Template Updates and Contract Clauses
Advancing an AI model through machine learning requires massive datasets to maximize complexity, efficacy, and utility, and the availability of such datasets will either permit or limit that advancement. The extent to which an AI model is exposed to the right type and quantity of data, and can use that data for training within the active machine learning process, is perhaps the single most determinative factor in the model’s advancement.
It is therefore no wonder that the purveyors of sophisticated AI models seek to utilize relevant data to increase the sophistication of their models’ capabilities. The concern becomes more apparent and striking when one considers that an AI model intended for use in a specific industry would naturally be best trained on data relevant to that intended use, which often means the very data that companies in the industry consider confidential or proprietary. Companies must therefore make informed decisions about the data they make available and the manner in which it is used.
Having pre-scrutinized contract language can be massively useful in streamlining commercial transactions. Taking proactive measures to protect a company’s confidential and proprietary information is nothing new. Whether it is through a nondisclosure agreement with a potential vendor or a policy for employees about permitted use of personal and company-owned electronic devices to access company information, the concept remains the same: protect company data and IP.
This holds true across what has standardly been included in agreements for the provision of licensed software and other technology services. For example, service providers commonly include express license terms under which the customer permits use of the customer’s data (perhaps in anonymized and/or aggregated form) for development purposes and to improve the services. It is likewise common for companies to require that software developers or providers first seek approval before incorporating free and open-source software (FOSS) or other third-party materials into a developed work or otherwise utilizing FOSS in a way that could implicate the company’s proprietary code. This requirement preempts a known risk: some FOSS is made available under license terms (known as “copyleft” terms) that could compromise the company’s IP rights in its software if the software makes certain use of such FOSS. This concern is exactly why companies do not simply incorporate any FOSS into their IP without first carefully scrutinizing the license terms.
The same goes for GenAI, in the sense that GenAI can be a useful tool for a company or its vendors to provide important services at a fraction of the cost. But, in the same way that it is prudent to know what license terms apply to FOSS to avoid unknowingly implicating copyleft terms, it is equally prudent to know how data input into an AI tool will be used to avoid unknowingly making the data available as training data for an AI model that may be accessed outside company walls.
Updating template agreements to address service provider use of certain AI tools is a readily available step companies can take. For example, a company can require service providers to disclose their use of certain AI tools, require authorization prior to the use of certain AI tools, and limit the type of data that may be used in AI model training (or otherwise define how such use can be made).
A sampling of terms that, if properly implemented, can help strengthen an updated vendor agreement is provided for informational purposes only, as follows:
No Use of AI Tools without Prior Approval: Supplier will not use any AI Tool (as defined) in providing the Services and/or Deliverables, or otherwise performing its obligations under this Agreement, without prior written approval of Company.
Where AI Tool Use Is Authorized: To the extent that Supplier is authorized by Company to utilize AI Tools in connection with its performance under this Agreement, Supplier shall at all times comply with Company’s AI Policy, which Supplier acknowledges may be amended by Company from time to time. Supplier will not, and will not cause any other party to, use Company data to train or fine-tune an AI model.
Software Improvement Exception—No Use of Company Data to Train AI Models: To the extent that Supplier has rights to use Company data for purposes of providing, maintaining, or improving the Services, such rights notwithstanding, Supplier is expressly prohibited from using Company data to train or fine-tune an AI model.
Proactively tailoring the terms of an agreement can allow a company to exercise greater control over how its data is used when engaging service providers. To the extent that a company finds itself using a service provider’s paper—and without the leverage to use its own template—the company that knows to look out for these considerations is still better equipped to protect its data and IP.
Conclusion
AI (specifically, GenAI) is here to stay, and is something every company in every industry should have its eye on. Even if a company is not purposely looking to use AI in any capacity, the fact that service providers across industries are leveraging AI to streamline their own operations and expenses means that companies are likely to become exposed to the risks whether they intend to or not.
The good news is that with the right proactive measures and armed with the right awareness, companies can build up the tools in their toolkits to protect the commercial assets they value most.