What Happens When One Part Is Missing?
“Garbage in, garbage out” is one of the most time-tested rules of data and technology, and it is having a renewed moment amid popular early impressions of GenAI’s capabilities.
The widespread availability of early public-facing versions of ChatGPT was useful because anyone could test the tool, which encouraged experimentation and drove initial innovation. Lawyers experimenting with ChatGPT soon found that it could create legal documents that sounded quite authoritative; they also found that relying on those documents could have real consequences.
The openly available versions of GenAI are really just sentence-completion engines trained on an unfiltered base of data from the Internet. ChatGPT has no intelligence of its own; rather, it’s good at drafting language that sounds like a plausible response to a user’s prompt.
Because it lacks native intelligence, ChatGPT may sound like a lawyer, but it doesn’t deliver the accuracy and precision that lawyers require. It can cite, and has cited, non-existent sources and invented facts. These errors are called hallucinations, and they include the now-infamous case in which a New York lawyer submitted a brief supporting a motion that cited non-existent cases and contained other legal inaccuracies. It sounded good, but he had simply asked ChatGPT to write his brief, with disastrous results.
ChatGPT has neither the data training nor domain expertise to be reliable. As a result, it can very easily introduce errors into its responses. It simply doesn’t know better.
The possibility of such errors presents an unacceptable level of risk for legal professionals. So, how will lawyers, especially those at smaller firms, effectively use GenAI while maintaining all their client and ethical obligations?
Authoritative Legal Data Is the Backbone of GenAI
Technology itself provides part of the answer for how lawyers can effectively use GenAI. Retrieval augmented generation (RAG) is a method designed to improve response quality in GenAI systems. In products that use RAG, the user’s prompt or query does not pass directly to the underlying large language model (LLM). Instead, the question first runs as a search against a trusted body of content, such as verified legal content from a legal publisher or trusted documents from the user’s organization.
Documents relevant to the question are retrieved first, and the question and the verified content are passed on to the LLM for processing. This ensures that the answers to users’ queries are grounded in trusted domain-specific data, not just a random sample of data (however large) from the wider Internet.
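The retrieve-then-generate flow described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation: the keyword-overlap scoring stands in for a real search index or embedding model, and the final prompt would be sent to an LLM API rather than printed.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# Assumptions: the corpus, scoring, and prompt format below are
# simplified stand-ins for a production search index and LLM call.

def tokenize(text: str) -> set[str]:
    # Lowercase bag-of-words; real systems use embeddings instead.
    return {w.strip(".,?") for w in text.lower().split()}

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    # Step 1: search the trusted content, not the open Internet.
    q = tokenize(question)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    # Step 2: ground the LLM's answer in the retrieved passages.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the sources below, and cite them.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )

# Tiny stand-in for a trusted legal corpus.
corpus = [
    "Rule 11 sanctions may apply when filings cite non-existent authority.",
    "A motion to dismiss tests the legal sufficiency of a complaint.",
    "Client confidences must be protected under Model Rule 1.6.",
]

question = "When can sanctions apply for citing fake cases?"
passages = retrieve(question, corpus)
prompt = build_prompt(question, passages)
# `prompt`, not the raw question, is what gets passed to the LLM.
```

The key design point is that the LLM only ever sees the question alongside verified passages, so its answer is anchored to the curated corpus rather than to whatever it absorbed from the wider Internet.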
The idea that legal GenAI means turning over legal decision-making to a machine overlooks the importance of the data portion of the equation. A properly trained and targeted GenAI application for legal use will build on the work of lawyers in the form of cases, statutes, briefs and memoranda, how-to guides, contracts, client advisories, templates—all the esoteric and robust data that results from lawyers’ expertise and knowledge.
All law firms, even smaller ones, have an opportunity to leverage their own data assets against competitors at scale once their lawyers identify and curate the appropriate critical data sets from within their firm and embed them within the AI solution. This proprietary data can be combined with larger pools of data from authoritative external sources, such as trusted legal content sources, to further strengthen the responses that the GenAI tool provides.
Legal Domain Expertise Is Vital for Success
Lawyers provide a layer of trust between clients and the systems that generate their various legal deliverables. Clients rely on lawyers’ domain expertise to ensure the right tools are used for their legal matters with the best results. After all, clients hire lawyers because they want trusted experts representing them.
The August 2023 Thomson Reuters Future of Professionals Report surveyed lawyers and accounting professionals about their concerns and expectations around the role of AI in their work. One of the report’s primary conclusions was, “As an industry, the biggest investment we will make is trust.” Trust is the legal advisor’s stock in trade; tools and technology can enhance productivity and generate new types of service offerings, but in the end, it’s the legal expertise of human lawyers that creates value on which clients rely.
That layer of trust has several dimensions, including:
- Accuracy. Is the machine producing the correct result? Does the lawyer understand how the machine works and know how to evaluate its accuracy?
- Ethics. Is the lawyer accountable for the work product? Is the lawyer fully representing the client’s interests and using the best tools for the job?
- Security and confidentiality. Is the lawyer protecting client data?
- Value. Is the lawyer using sound judgment in choosing where automation creates value and where direct human interaction is more valuable?
Ultimately, the value that clients see in their outside firms’ legal work is not determined by the underlying tools; it’s found in the client’s confidence in their lawyers’ expertise in matching the right tool and methodology to the legal task and their lawyers’ accountability for the result.
Small law firms can combine strong proprietary data sets, access to the same technology and industry data used by larger firms, and deep domain expertise with the inherent trust they build through high-touch client relationships. With that combination, they can use GenAI to gain scale and competitive capabilities previously unheard of in the legal industry.