
Expert Q&A on ChatGPT, Generative AI, and LLMs for Litigators

Rawia Ashraf, Jessica Brand, and Lauren Sobel

Summary

  • Attorneys can use AI for numerous functions, including searching and retrieving information, drafting and summarizing content, and answering both broad and narrow questions.
  • Litigators should think of LLMs as tools to enhance their delivery of legal services, rather than tools that can replace them by delivering legal work product without any attorney involvement.

What are the basics litigators should know about how generative AI and LLMs work?

Generative artificial intelligence (AI) is a type of AI that generates new content or data in response to a prompt, or question, from a user. Large language models (LLMs) are an advanced form of generative AI that are the basis for generative pre-trained transformer (GPT) platforms, such as ChatGPT. LLMs can process and generate natural language text in a seemingly human manner. To use ChatGPT (and similar platforms), a user types in a research question or a request for information, sometimes based on documents, images, or other information the user provides, and ChatGPT provides a response written in natural language as if a human had written it.

LLMs are trained on vast amounts of data from a range of sources, including books, blogs, news articles, Wikipedia information, social media posts, and other website content. LLMs are general purpose models that “understand” a wide variety of domains and language constructs because of the diversity of the data on which they are trained. LLMs are capable of a multitude of functions, including searching and retrieving information, drafting and summarizing content, and answering both broad and narrow questions. It is important to recognize that when generative AI produces a response to a prompt, it is predicting, based on its knowledge of language patterns, what words are most likely to come next. It is a tool optimized to synthesize content, not necessarily to recall facts. This is what distinguishes it from popular and commonly used search engines.

For example, LLMs such as GPT-4 can, to a surprisingly impressive standard:

  • Explain quantum computing in simple terms.
  • Perform tasks such as writing code for an application.
  • Summarize an academic research paper.
  • Create study questions for a school subject.
  • Draft a screenplay.

Perhaps most impressively, this technology can also iterate on the content it has generated. For example, after drafting a screenplay, it can take feedback from the user and make the screenplay more dramatic or humorous, or introduce a specific character, based on new prompts.
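
To make the prompt-and-response pattern concrete, the sketch below shows one way a developer might send a prompt to an LLM and then iterate on the output with a follow-up prompt. It is a minimal sketch only, assuming the publicly available OpenAI Python library (version 1.x) and an API key set in the environment; the model name and prompts are illustrative placeholders, not a recommendation of any particular platform.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Initial prompt: ask for a plain-language explanation.
messages = [{"role": "user",
             "content": "Explain quantum computing in simple terms."}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
print(first.choices[0].message.content)

# Iteration: send the model its own answer plus a new instruction, so it
# revises the content it already generated rather than starting from scratch.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Rewrite that explanation for an audience with no "
                            "science background, using one everyday analogy."})
second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```

The practical point is that the model keeps no memory between calls; iteration works only because the earlier exchange is sent back along with each new prompt.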

Legal software tools are beginning to incorporate LLMs such as GPT-4 to help litigators become more effective and efficient. However, attorneys must be careful in selecting when and how to use these technologies. As LLMs continue to learn legal-specific information, and as more law firms and companies begin licensing the right to use LLMs with terms of use that adequately protect confidential client information and attorney work product, litigators will be able to harness these tools in a variety of significant ways.

While the intersection of generative AI and the legal industry is new, it is essential for legal professionals to stay up to date on this emerging technology and responsibly experiment with it to form their own views on its pros and cons. Ultimately, to fulfill their ethical duties, litigators must understand how to use this emerging technology, as well as how not to use it, which requires a basic understanding of its capabilities (see What are the primary ethical pitfalls of using generative AI and LLMs, and how can litigators avoid them?).

What are potential common uses of LLMs in the litigation context?

Litigators should think of LLMs as tools to enhance their delivery of legal services, rather than tools that can replace them by delivering legal work product without any attorney involvement. Currently, LLMs are an emerging technology that hold incredible promise for the enhanced delivery of legal services in both the near and distant future.

Although not without risks, LLMs may be particularly helpful to litigators and in-house counsel overseeing litigation because of their ability to:

Summarize transcripts, legislation, and other documents. Currently, with the right set of prompts, LLMs can often summarize content on par with a human’s ability to summarize. However, most existing LLMs are generalists, with broad training on many topics but not in a specific area of expertise. As legal-specific LLMs emerge, they are likely to perform increasingly well when summarizing legal documents, such as:

  • Pleadings.
  • Deposition transcripts.
  • Court transcripts.
  • Proposed legislation.
  • Statutes.
  • Regulations.
  • Other dense or lengthy legal documents.

For example, litigators who want a simpler explanation of a complex expert opinion or statute could use an LLM application to summarize it in natural language.

A less obvious but potentially critical use for litigators is employing LLMs to summarize and then analyze information. Law firms and companies that license the right to use legal-specific LLM platforms may be able to create their own interface, where they can upload documents and instruct the application to search for key people or events. For example, a litigator could upload a set of deposition transcripts and instruct their LLM platform to read and summarize them while paying special attention to certain key events the deponents mention. Based on that instruction, the LLM, if properly trained and given examples to imitate, could then (see the sketch following this list):

  • Identify topics that lack sufficient testimony and need further development.
  • Highlight discrepancies in how witnesses describe an event.
  • Return its results in a table that identifies the witness, the timestamps for the testimony, the statements they made, and how those statements varied from other witnesses’ testimony.
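
As a rough illustration of the workflow just described, the sketch below loops over a folder of transcripts and asks a model to summarize each one with attention to a list of key events, returning its findings in a table. It is a simplified sketch only, again assuming the OpenAI Python library; the folder name, key events, and model name are hypothetical, and real transcripts would typically exceed the model's context window and need to be split into chunks.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical key events the reviewing attorney wants flagged.
KEY_EVENTS = ["the March 2021 board meeting", "the product recall decision"]

INSTRUCTIONS = (
    "Summarize the deposition transcript below for a litigation team. "
    f"Pay special attention to testimony about: {', '.join(KEY_EVENTS)}. "
    "Return a table with columns: witness | page/line | statement | "
    "how the statement differs from other witnesses' testimony. "
    "Also list topics that lack sufficient testimony and need further development."
)

# Hypothetical folder of plain-text transcripts; long transcripts would need
# to be chunked to fit within the model's context window.
for transcript in sorted(Path("transcripts").glob("*.txt")):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user",
                   "content": INSTRUCTIONS + "\n\n" + transcript.read_text()}],
    )
    print(f"=== {transcript.name} ===")
    print(response.choices[0].message.content)
```

Output produced this way still requires attorney review; as discussed below, a model can misstate or omit testimony with complete confidence.
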
Assist in document review and electronic discovery (e-discovery). In addition to answering questions about public data, LLMs are capable of extracting information from private data that users input and upload.

One application of this information retrieval capability could be in e-discovery, where an LLM-powered application could respond to specific questions from an attorney with a synthesized response, as opposed to simply returning documents that match a user’s search query. For example, in an antitrust case alleging collusion between two competing companies, an attorney could ask an LLM to review all the documents in a production and formulate an answer about how the companies were involved with each other.
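
One common way to build this kind of question answering is to retrieve the most relevant documents first and then ask the model to synthesize an answer from only those excerpts, citing the documents it relied on. The sketch below is a deliberately simplified, hypothetical version of that pattern: the production excerpts and Bates numbers are invented, and the retrieval step is a crude keyword match standing in for the search index or embedding-based retrieval a real e-discovery platform would use.

```python
from openai import OpenAI

client = OpenAI()

# Invented excerpts standing in for a document production; a real matter would
# pull text from a review platform or an indexed document database.
PRODUCTION = [
    {"bates": "ACME-000123",
     "text": "Email from ACME sales VP to Beta Corp proposing a call about regional pricing."},
    {"bates": "ACME-004567",
     "text": "Calendar invite for a quarterly meeting between ACME and Beta Corp executives."},
]

def retrieve(question: str, documents: list[dict], top_k: int = 5) -> list[dict]:
    """Crude keyword-overlap ranking; real systems use search indexes or embeddings."""
    terms = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: -len(terms & set(d["text"].lower().split())))
    return ranked[:top_k]

question = "How were ACME and Beta Corp involved with each other on pricing?"
excerpts = "\n\n".join(f"[{d['bates']}] {d['text']}"
                       for d in retrieve(question, PRODUCTION))

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user",
               "content": ("Answer the question using only the excerpts below, "
                           "citing the Bates numbers you rely on. If the excerpts "
                           "do not answer the question, say so.\n\n"
                           f"Excerpts:\n{excerpts}\n\nQuestion: {question}")}],
)
print(response.choices[0].message.content)
```

Requiring the model to cite the documents it relied on makes the synthesized answer easier to verify against the record, although it does not eliminate the risk of hallucination discussed below.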

Additionally, litigators may find it useful to have an LLM develop a case chronology and identify exhibits supporting a particular set of facts or legal theory with citations to the record. At a later stage, when drafting a motion, a litigator could ask the AI to identify all exhibits that support a particular assertion of fact with citations to the record.

Draft documents. One of generative AI’s core capabilities is generating new content or data in response to a prompt. ChatGPT and other LLM platforms are adept at drafting content that appears to be written by a human with general knowledge of the subject at hand. However, when specialized knowledge is needed, such as in legal drafting, these platforms do not yet appear ready to perform these kinds of tasks. This is because these platforms are known to “hallucinate,” meaning they provide made-up or factually incorrect answers with a high degree of confidence. Although the results may appear to include accurate information, on a close reading and further research, counsel may find that procedural tools, legal theories, and even citations contained in the response do not actually exist. Additionally, litigators often need to adapt their style of writing based on the type of document they are drafting (for example, a litigator may take a more adversarial tone in a motion to dismiss than in a letter to a judge requesting continuance of a hearing). ChatGPT and similar platforms do not yet appear fully capable of capturing these types of nuances on their own, but if given examples of the style or tone desired, the AI can imitate the example.

Lastly, for generative AI to be useful in legal drafting, it must be trained with volumes of legal-specific data, such as motions, briefs, opinions, contracts, and statutes. At this stage of relative infancy, this technology is probably best suited for creating first drafts of non-legal content, such as emails, presentations, marketing materials, and other non-legal documents needed in the ordinary course of business. However, as models are fine-tuned on legal data and resources, they will become good collaborators even on first drafts of legal work product.

Perform legal research. LLMs can sometimes produce acceptable results when asked a simple legal research question, such as a request to identify the elements of a tort. However, litigators must keep in mind the current tendency of the technology to hallucinate and provide incorrect answers. LLMs are known to pull inaccurate information even for basic questions. For example, they may pull information from the wrong jurisdiction or from a case that is no longer good law. For these reasons, litigators should exercise extreme caution when using general LLMs for legal research and should use their expertise to review and analyze the results to determine their accuracy. Again, as legally trained LLMs enter the market, meaning LLMs that are trained to perform legal research and have access to mature legal content sets, litigators should expect to benefit from the efficiency and time savings these tools provide.

Identify patterns and predict litigation outcomes. Litigators considering whether to file in a particular court, remove a case to federal court, make a particular motion, or settle a case may find LLMs’ capabilities to be especially useful. Generative AI has strong predictive capabilities. Using generative AI to help derive insights from large sets of legal data could advance litigation analytics well beyond existing capacities. Litigators can use the technology to identify patterns in how cases are settled or decided. LLMs can analyze patterns in past cases and predict outcomes of future cases, including the likelihood of success of a particular argument before a particular judge.

Improve access to justice. Over time, as generative AI becomes better trained on performing legal tasks, it could create tremendous improvements in access to justice. Generative AI can help to mitigate some of the barriers to access to justice, including:

  • Lack of knowledge about one’s rights or the law.
  • Unequal access to trained legal professionals, including for financial reasons.
  • Limited availability of pro bono services from attorneys with insufficient time or resources.

What significant risks do generative AI and LLMs present in the litigation context?

Generative AI and LLMs have several known limitations and weaknesses that litigators and other legal professionals should be acutely aware of, in addition to limitations and weaknesses not yet uncovered. To understand the risks, litigators should consider two categories of risk: output risk (for example, the information created may be risky to use) and input risk (for example, the information supplied to the model may be at risk). The most prevalent output risks are:

  • Inaccuracy. As mentioned above, GPT models are known to hallucinate, meaning they provide incorrect answers with a high degree of confidence. Because these models are not able to reason as human beings do and are not always knowledgeable about the topic they are discussing, they are known to produce false or nonsensical answers. This may occur, for example, because the model has insufficient training data about a particular subject matter. The possibility of hallucination, combined with the lack of legal domain knowledge in most LLMs, makes it particularly risky for litigators to rely on information produced by an LLM. However, the risk of inaccuracies and hallucinations should decrease as legal-specific data is added to train LLMs, a process called fine-tuning. When LLMs are fine-tuned with legal information, they will become more familiar with legal language, concepts, and patterns, and the accuracy of the information they provide should increase substantially.
  • Bias. LLMs, similar to any type of AI, can be biased. If biases exist in the data used to train the AI, those biases will inform the content the AI generates as well. Models trained with data that are biased toward one outcome or group will reflect that in their performance.

The biggest input risk is a breach of confidentiality. A key risk in using LLMs involves the attorney-client privilege, attorney work-product doctrine, data security, and confidentiality. As an overarching principle, litigators planning to use LLMs should make sure the platform they are using does not retain or allow third parties to access the data. For example, one way to accomplish this may be by entering into a licensing agreement (with the AI provider or the platform that incorporates the AI) that contains strict confidentiality provisions and explicitly protects the information the user uploads from being retained or accessed. Unless and until these kinds of protections are in place, legal professionals should not put sensitive, confidential, or privileged information into a public model, such as ChatGPT (see also What are the primary ethical pitfalls of using generative AI and LLMs, and how can litigators avoid them?). However, platform developers are already starting to roll out new functionalities to address these kinds of privacy issues, such as by allowing users to turn off their chat histories and prevent the information they enter from being used to train the platform.

What are the primary ethical pitfalls of using generative AI and LLMs, and how can litigators avoid them?

Most critically, litigators and other legal professionals must recognize that generative AI is a tool that can help make legal work more efficient but does not replace human expertise. Litigators need to remember that they are ultimately responsible when they use generative AI in their legal work. Much like the ethical violations they can face for the conduct of non-attorneys they supervise or who act at their behest (see American Bar Association Model Rule of Professional Conduct (ABA Model Rule) 5.3), litigators who use technology such as generative AI to assist them in legal work without proper oversight can face a multitude of ethical violations (see ABA Model Rule 5.3 cmt. 3 (requiring attorneys to use reasonable efforts to ensure that any services they use outside of a law firm are provided in a manner compatible with the attorney’s professional obligations)).

Litigators must apply their legal judgment and experience when using generative AI, just as when using another type of AI or even any other type of technology. For example, because generative AI can produce seemingly reasonable answers that are incorrect or omit critical information, trained and experienced attorneys must still be involved to review and carefully analyze the results that generative AI produces, and to use their expertise and legal skills to identify and correct these types of issues.

All attorneys, including litigators, also have an ethical obligation to understand a technology before using it, at least to some degree. This obligation stems from an attorney’s ethical duty to provide competent representation to a client, also known as the duty of competence (see ABA Model Rule 1.1). The duty typically requires an attorney to have the knowledge and skill necessary for the representation, which in turn means that the attorney must keep up with changes in relevant technology and reasonably understand its benefits and risks (ABA Model Rule 1.1 cmt. 8). A growing number of jurisdictions have specifically incorporated a duty of technological competence into their ethical rules, making this ethical rule even more relevant in the context of generative AI.

To help gain a reasonable understanding of how generative AI works before using it to conduct legal work, litigators should familiarize themselves with how the platform was trained, its known limitations, the types of tasks for which it can appropriately be used, and the quality of the information it produces.

Protecting confidential information is another area of significant ethical concern when using generative AI. This is because generative AI is, by its nature, designed to learn at least in part from the information that users provide. As a result, when litigators use generative AI to help answer a specific legal question or draft a document specific to a matter by typing in case-specific facts or information, they may be sharing confidential information with third parties, such as the platform’s developers or other users of the platform.

This is problematic because, with limited exceptions, attorneys:

  • Cannot reveal confidential information relating to the representation of a client absent the client’s informed consent.
  • Must make reasonable efforts to prevent unauthorized disclosure of or access to information relating to the representation of a client.

(ABA Model Rule 1.6(a)-(c).) The scope of the confidential information protected by this ethical rule is broader than the scope of confidential information in the context of the attorney-client and work product privileges (see ABA Model Rule 1.6 cmt. 3).

Unless and until an attorney can be reasonably sure that the information entered into a generative AI platform is not accessible to third parties, such as through licensing agreements or terms of use that make it clear that developers and other users cannot view or access the information, attorneys should avoid putting any sensitive information into ChatGPT or other publicly available LLMs. However, as developers continue to actively address confidentiality issues by rolling out new data controls, such as those that may allow users to prevent the information they enter from being used to train the platform, this may become less of a concern.

What types of procedural and substantive issues are likely to arise in litigation stemming from the use of generative AI and LLMs?

From a procedural perspective, the content LLMs generate, and the prompts entered by users to retrieve information, are sure to create new discovery and evidentiary issues, or at least test the limits of existing evidentiary rules as other new technologies, such as social media or ephemeral messaging, have done. For example, courts will likely face the issue of whether to admit evidence generated in whole or in part from generative AI or LLMs, and new standards for reliability and admissibility may develop for this type of evidence.

Prohibitions against using information produced by generative AI may also rapidly develop in regulated industries, such as the banking and finance industries, where the underlying basis for the information may not be clear or easily explained to regulators. An increase in class action lawsuits by plaintiffs ranging from consumers to artists in various areas of law is also likely.

As for the types of substantive litigation that may proliferate around this emerging technology, generative AI use almost certainly will increase (and in some cases has already triggered) litigation involving many different areas of law. For example, foreseeable litigation includes matters involving:

  • Copyright issues, such as whether it is a fair use of copyrighted material to train generative AI, whether art generated by LLMs can be copyrighted, and whether an LLM platform itself can be liable for copyright infringement.
  • Data privacy issues, such as data breach litigation and violations of the EU General Data Protection Regulation (GDPR) (for example, violations of the “right to be forgotten”) stemming from an LLM’s collection and storage of personal information.
  • Consumer fraud claims based on companies’ use of generative AI to create fake reviews or otherwise market goods or services and failure to disclose that use to customers.
  • Defamation claims stemming from hallucinations or false information generated by an LLM.
  • Legal malpractice claims stemming from a variety of improper behavior, such as an attorney’s use of the tool as a replacement for their expertise and reliance on inaccurate information without properly reviewing it, failure to obtain a client’s consent to the use of generative AI in the client’s case or failure to disclose that use to the client, or improperly billing the client for time spent on work that was actually done by generative AI.

Reprinted with permission from Thomson Reuters Practical Law. © 2023 by Thomson Reuters. All rights reserved. Practical Law is an online legal solution that provides access to how-to guides, templates, checklists, comparison charts, and more, all written and maintained by experienced attorneys. Quickly get up to speed and practice efficiently with Practical Law.
