- Why do AI tools frequently underwhelm in the trenches, especially in the legal profession?
- GPT-3’s ability to produce sophisticated documents is at the heart of its distinction from many other AI tools and could be a game-changer for legal organizations.
One doesn’t have to dig too deep into legal organizations to find AI skeptics. AI is getting tremendous attention and significant venture capital, but AI tools frequently underwhelm in the trenches. Here are a few reasons why that is and why I believe GPT-3, a beta version of which was recently released by the OpenAI Foundation, might be a game-changer in legal and other knowledge-focused organizations.
GPT-3 is getting a lot of oxygen lately because of its size, scope, and capabilities. However, it should be recognized that a significant amount of that attention is due to its association with Elon Musk. The OpenAI Foundation, which created GPT-3, was founded by heavy hitters Musk and Sam Altman and is supported by Marc Benioff, Peter Thiel, and Microsoft, among others. Arthur C. Clarke once observed that great innovations happen after everyone stops laughing. Musk has made the world stop laughing in so many ambitious areas that the world is inclined to give a project in which he’s had a hand a second look. GPT-3 is getting the benefit of that spotlight. I suggest, however, that the attention might be warranted on its merits.
WHY SOME AI-BASED TOOLS HAVE STRUGGLED IN THE LEGAL PROFESSION AND HOW GPT-3 MIGHT BE DIFFERENT
1. Not Every Problem Is a Nail
It is said that when you’re a hammer, every problem is a nail. The networks and algorithms that power AI are quite good at drawing correlations across enormous data sets that would not be obvious to humans. One of my favorite examples of this is a loan-underwriting AI that determined that the charge level of the battery on your phone at the time of application is correlated to your underwriting risk. Who knows why that is? A human would not have surmised that connection. Those things are not rationally related, just statistically related.
This capability makes AI tools good at grouping like things together to facilitate users finding them based upon revealed correlations. Consequently, many AI applications are some variant of finding stuff better. It is what they do well. However, “finding stuff” is not a first-order problem in legal organizations. It is merely a means to an end.
The “end” in legal organizations is a document of some kind. Documents are their widget—the thing legal teams build. Finding information that is relevant to creating a document is helpful. Actually producing that document, though, is far more helpful.
Producing documents, it turns out, is something GPT-3 does very well. That is at the heart of its distinction from many other AI tools: its ability to produce sophisticated documents. At its core, GPT-3 is a text-prediction engine. It is designed to accept a string of text as input and, from a statistical analysis of everything it has ingested, predict what text should come next. That process can be repeated recursively, so an entire document can be generated from a simple text string.
It does this through statistics and algebra, more or less. GPT-3 has read, essentially, everything—at least all substantive publicly available documents in huge portions of the internet, which at this point in history represents a material segment of all expressed human knowledge. Accordingly, it can predict, given some input of text, what text is statistically likely to come next. You can feed it a few lines, and it predicts the next. Moreover, early testers claim that you can instruct GPT-3 to write in a certain voice. Your document can be created in the voice of Hemingway, Shakespeare, or Barack Obama. Pretty cool stuff.
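The recursive predict-and-append loop described above can be illustrated with a toy word-level bigram model. This is, of course, a vastly simplified stand-in for GPT-3’s actual architecture, and the training snippet and function names below are invented for illustration, but the loop itself is the same idea: predict the statistically likely next word, append it, and repeat.

```python
from collections import defaultdict
import random

def train_bigram_model(text):
    """Record, for each word, the words that follow it in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length, rng_seed=0):
    """Repeatedly predict a next word and append it -- the recursive loop."""
    rng = random.Random(rng_seed)  # fixed seed for reproducibility
    output = [seed]
    for _ in range(length):
        candidates = model.get(output[-1])
        if not candidates:
            break  # nothing ever followed this word in training
        output.append(rng.choice(candidates))
    return " ".join(output)

# Hypothetical one-line "corpus"; GPT-3's is a huge slice of the internet.
corpus = "the party shall indemnify the other party against all claims"
model = train_bigram_model(corpus)
print(generate(model, "the", 5))
```

Every word the toy model emits is statistically plausible given the word before it, yet nothing in the loop “understands” indemnification, which previews the reasoning criticism discussed later.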
I think this is the breakthrough for legal organizations. GPT-3 isn’t just finding stuff for you; GPT-3 is making stuff for you. Certainly, other AI products add value—it’s not trivial that we have something that makes wrenches—but it’s another thing entirely that if you sell cars, you have something that makes a car.
2. With Data Sets, Sometimes the Juice Isn’t Worth the Squeeze
Most enterprises that have implemented AI tools confront the training-dataset problem. Algorithms designed around enormous datasets depend on similarly large datasets in operation. When such tools move out of the lab and into the enterprise, assembling an appropriate dataset is often a gating factor.
The issue in legal organizations is one of scale and effort. The volume of documents in most legal organizations, even large ones, is nowhere near the numbers for which AI tools were designed. In addition, vetting and assembling such datasets, and authenticating a product’s performance after training on them, can be extremely time consuming. In areas such as contract intelligence, tools trained on large, publicly available data, such as the SEC’s EDGAR database, can be an exception to this problem. These tend to work out-of-the-box on an organization’s contracts, which tend to resemble the large public dataset. Absent that predelivery training, however, creating and maintaining the dataset is frequently a bar to success in an organization.
GPT-3, however, comes pretrained on a vast corpus of publicly available text. Because of that pretraining, it is functional out-of-the-box for the purpose of generating documents. Early research suggests that it can be fine-tuned on an organization’s own data, but it doesn’t have to be. This removes the primary barrier business users face in getting out of the gate with some AI tools.
3. Thinking Isn’t as Important as Doing
One criticism that has been leveled at GPT-3 is that it does not “reason” as humans do, so its output is occasionally absurd. That’s an accurate criticism, and the public conversation about GPT-3 is not short of humorous examples.
GPT-3 is a statistical engine, without the reasoning ability of humans or of the yet-to-be-created “strong AI.” People frequently ask whether an AI will pass the Turing Test (that is, whether it could fool a human into believing he or she was interacting with another human). Although that is a useful shorthand for measuring an AI’s reasoning ability compared to humans, it doesn’t say much about its usefulness. In knowledge organizations where creating documents is a central activity, usefulness is judged by a tool’s ability to do that task, not its ability to fool someone about the source. For that purpose, GPT-3 appears to be well-suited. Although the absurd output that GPT-3 sometimes creates can be fun to see, the wrong turns are pretty obvious and unlikely to escape even cursory review. Most of us can probably point to some pretty absurd output from humans, too, but it’s hardly a reason to dismiss them as participants in the ecosystem.
What GPT-3 does is create stuff, rapidly, based upon a significant chunk of human knowledge. For all that doing, perhaps its lack of thinking can be forgiven.
APPLICATIONS FOR GPT-3 WORTH EXPLORING IN LEGAL ORGANIZATIONS
Given that GPT-3 is good at generating documents, it’s easy to imagine applications of this technology in legal organizations. Almost any document-oriented task (except, presumably, those where the unique facts overwhelm all other aspects of the document) is a good candidate. Here are a few that we will be exploring with our corporate legal department clients, which I suspect are representative of those that will make sense for other organizations:
- Powering intake systems. In our work with corporate legal departments, a common problem is managing the interaction between business units and the legal department. The requester wants prompt, accurate help. The legal department wants to provide that help while complying with headcount and bandwidth constraints. One aspect of the intake process can be providing immediate answers to common questions. GPT-3 can be part of that solution by providing contextual answers (rather than selecting from an inventory of stock answers like typical bots) as well as creating first drafts of documents. GPT-3’s ability to create answers and documents on the fly can enable chat and intake systems that users find useful rather than off-putting.
- Document creation. Creating first-pass documents based on what has been done before is not only powerful, it’s what humans already do. Early in my legal practice, a senior partner once told me, “We created a set of documents in the Garden of Eden and have just been modifying them ever since.” GPT-3’s garden is much larger; it can include your collection plus everything else. In addition, its model has 175 billion parameters, so it can make statistically grounded choices incredibly fast. One can imagine GPT-3 being part of the process that creates initial drafts of legal memoranda, contracts, policy manuals, HR documents, RFPs, and audit responses, among other documents that people commonly create by finding and patching together prior versions.
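One minimal shape such a drafting workflow could take is a prompt-assembly step: gather the document type, the matter facts, and optionally a style sample, and hand the resulting prompt to a text-prediction engine to continue into a first draft. The function name and prompt wording below are hypothetical, and the call to the completion model itself is left as a comment rather than a real API call:

```python
def build_drafting_prompt(doc_type, facts, style_example=""):
    """Assemble a completion prompt from instructions, an optional style
    sample, and the matter facts. A text-prediction engine then continues
    this prompt into a first draft for attorney review."""
    parts = [f"Draft a {doc_type} for review by counsel."]
    if style_example:
        parts.append("Match the style of this example:\n" + style_example)
    parts.append("Facts:\n" + facts)
    parts.append(f"Begin the {doc_type} below:\n")
    return "\n\n".join(parts)

prompt = build_drafting_prompt(
    "non-disclosure agreement",
    "Acme Corp. will share product roadmaps with Beta LLC for two years.",
)
# In practice, `prompt` would be sent to a completion endpoint (e.g.,
# OpenAI's API) and the generated draft reviewed by a lawyer before use.
print(prompt)
```

The point of keeping the prompt-building step explicit is that it is where an intake system would inject the requester’s context, and where a voice or style sample could be supplied, as early testers report GPT-3 supports.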
WHERE WE GO FROM HERE
GPT-3 is not the only helpful AI tool at our disposal; however, it does represent a transition from making the raw materials of end products to making end products themselves. For legal organizations and other knowledge workers, that is a material change. In addition, the enormous dataset upon which it is pretrained removes one of the barriers to experimentation and implementation. GPT-3 has its limits, but frequently the first limit is our imaginations. In the case of GPT-3, stretching our imaginations might serve us well.
My company specializes in understanding what business capabilities can be enabled by all the cool new tech coming into the world. We believe that GPT-3 provides some new tools of value in a legal department’s arsenal and will be focused on assessing practical, impactful solutions, hopefully making better legal organizations in the process—once the world stops laughing, of course.