First Colin, let’s start with the basics. What is AI?
Artificial intelligence is an umbrella term referring to many different technologies in computers or machines that can mimic the cognitive functions associated with human intelligence. This can include responding to spoken or written language, analyzing data, making recommendations, seeing things in a picture, etc.
As many of our readers have tried ChatGPT, can you tell us a bit about how it works?
Sure. Let’s start by breaking down the terms and looking at the supporting technology. The GPT in ChatGPT stands for generative pre-trained transformer and is an example of a foundational large language model (LLM). The chat in ChatGPT is its novel aspect: it has a simple “chat” interface that allows humans to ask questions and receive answers in a conversational way from an LLM. The input text can be a question, a set of instructions or even a document. When given a question or prompt, ChatGPT follows the instruction and provides a detailed response that reads like it was written by a human.
But don’t be fooled. ChatGPT doesn’t actually understand its own answer in the way a human does, nor did it pull its response from a database of prepared and verified answers to that particular question. The “language” part of LLM means its focus is on organizing, summarizing and predicting word patterns to generate a response. LLMs spot patterns in how words, phrases, sentences and even paragraphs relate to each other, and then make statistical predictions about what words should come next. The “large” in large language model comes from the fact that the LLMs behind tools like ChatGPT are trained on collections of hundreds of billions of words from across the open internet and other sources (100 billion = 100,000,000,000). Training on such large data sets allows LLMs to give answers that contain real facts and sound totally plausible.
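To make the “statistical prediction” idea concrete, here is a toy sketch. This is nothing like GPT’s actual architecture; it just illustrates the counting intuition: tally which word follows which in a tiny made-up corpus, then predict the most frequent follower.

```python
from collections import defaultdict

# Tiny made-up corpus of legal-sounding text
corpus = ("the court held that the motion was denied and "
          "the court held that the appeal was dismissed").split()

# Count how often each word follows each other word
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict("court"))  # → "held"
print(predict("the"))    # → "court"
```

A real LLM does this over billions of words and much longer contexts, but the output is still a prediction of likely next words, not a verified fact.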
There are larger LLMs, but there are also many much smaller models (single-digit billions) developed in academic, commercial and open-source communities. By training an LLM on more specialized data, you can achieve higher performance at a much lower development cost. Law-specific LLMs will be a source of innovation in the legal arena and will give us more reliable tools.
Research and writing are two of the most common tasks undertaken by lawyers and staff in a law office. Can you give us some examples of how a tool like ChatGPT could help with research?
As a starting point, we should always keep in mind OpenAI’s caution, posted prominently just below the query box: “ChatGPT may produce inaccurate information about people, places, or facts.” Attorneys should pay heed to this warning and recognize that ChatGPT is not a totally reliable research tool. However, understanding ChatGPT’s limitations is key to understanding where and how it and similar tools can be used in a law-office setting.
While it is clear legal information was a part of ChatGPT’s training data, remember that it’s not a database of verified facts. ChatGPT has no ability to understand or assess whether its answers are accurate and complete. A correct answer to a question about a particular point of law or case is an achievement in statistics, not one of legal reasoning or diligent fact checking.
Remember, LLMs spot patterns in how words and phrases relate to each other, and then make predictions about what words should come next. Because the legal domain involves many very particular and often repetitive phrases (in contracts, pleadings, legislation, case law, judgments and other legal documents), ChatGPT’s strengths can prove quite useful for finding general definitions or explanations of broad legal principles. Think textbook, not casebook, when asking questions. For example, you may find its explanation of the principle of laches quite useful, but relying on its suggestions of leading Second Circuit jurisprudence could put you in peril. A case citation could refer to a real decision, or it could be entirely made up yet look very real, because the LLM’s output will match the format of a citation to a real case. ChatGPT also has some randomness built into its responses, which is why it gives a slightly different answer if you ask the same question twice. Random answers make for more interesting reading, but they aren’t welcome from a malpractice point of view.
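The built-in randomness Colin describes is typically controlled by a setting often called “temperature.” A minimal sketch of the idea, with made-up words and scores (real models choose among a huge vocabulary at every step):

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick one word at random, weighted by its score.

    Low temperature concentrates probability on the top-scoring word;
    high temperature flattens the odds, producing more surprising picks.
    """
    words = list(scores)
    weights = [math.exp(scores[w] / temperature) for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical model scores for the word after "the court ..."
scores = {"held": 3.0, "found": 2.0, "danced": -4.0}

# Near-zero temperature: essentially always the top word
print(sample_next_word(scores, temperature=0.05))  # → "held"

# High temperature: any of the three words may appear
print(sample_next_word(scores, temperature=5.0))
```

The same prompt can therefore yield different wording, and occasionally a lower-probability (wrong) continuation, on each run.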
Can you give us some examples of how a tool like ChatGPT could help with writing?
There are a few main ways you might approach ChatGPT for help with writing: as your first-draft author, as your editor, as your assistant or as your adversary. In any scenario, you must still remain vigilant.
As a first-draft author, you might prompt it as follows: “You are a senior trial attorney addressing opposing counsel. You use very short sentences and are fond of section headings. Write a letter proposing terms of settlement based on the following: [list the relevant facts, arguments and desired outcome].”
When relying on it as an editor, you might prompt it as follows: “You are an articulate attorney with an impressive vocabulary. You have a concise writing style. Rewrite the following in fewer than 300 words: [paste your first draft].”
As an assistant, you may feed it content alongside prompts like “summarize this” or “create a three-column table of all the names in this document, their affiliation and their position on ABC” or “for each argument in this document, suggest a counterargument.”
Finally, as your adversary, you can ask it to find the holes in your logic, flag unsupported statements and raise counterarguments to your main points.
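For readers curious about what sits behind the chat box: tools built on an LLM usually package the persona instruction and your text as a list of role-tagged messages. A minimal sketch of assembling the editor prompt above in that style (the message format mirrors OpenAI’s chat API, the model name is illustrative, and nothing is actually sent anywhere):

```python
def build_edit_request(draft, max_words=300):
    """Assemble a chat-style request asking the model to tighten a draft."""
    return {
        "model": "gpt-4",  # illustrative model name
        "messages": [
            # The "system" message sets the persona and style
            {"role": "system",
             "content": ("You are an articulate attorney with an impressive "
                         "vocabulary. You have a concise writing style.")},
            # The "user" message carries the instruction and your text
            {"role": "user",
             "content": (f"Rewrite the following in fewer than "
                         f"{max_words} words:\n\n{draft}")},
        ],
    }

request = build_edit_request("Our client respectfully submits that ...")
print(request["messages"][0]["role"])  # → "system"
```

Separating the standing instruction from the document you paste in is what lets you reuse the same persona across many drafts.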
In these examples, you will notice that the prompts are far more detailed than the simple Google search queries we all now run without a second thought. With ChatGPT, the way you frame your request and the direction you give on the response you want (e.g., the style of writing, the document format, the word length) will have a significant impact on the results you receive. It will take some practice, but learning how to query ChatGPT is a worthwhile investment of your time, as it will give you better (but never totally reliable) responses.
What about confidentiality and privacy concerns?
ChatGPT raises many ethical questions for attorneys using this new and evolving technology, so it’s worth approaching ChatGPT and LLMs with some caution and adhering to core principles around the use of confidential client information. Many LLMs analyze the queries they receive to “learn” and improve their statistical predictions. The folks behind ChatGPT have been responsive to confidentiality concerns and have made it easier to opt out of having your sessions tracked and used in their training models. Nonetheless, many firms are playing it safe and simply not permitting their teams to let confidential client information get anywhere near it.
So Colin, what is the bottom line on ChatGPT?
It is a transformative, general-use technology tool attorneys can use now to harness the benefits of AI and LLMs in their practices. But truly, this is just the beginning, as new LLM tools designed specifically for the legal industry are already coming to market. With improved features, these tools will address the limitations and concerns of ChatGPT. And LLMs are in your future on many fronts, as ChatGPT-like features will be integrated into your Microsoft Office or Google Workspace suites, search engines like Google and Bing, and even your smartphone within the next year or two.
Thanks for your thoughts, Colin. Clearly, attorneys who want to future-proof themselves should take time to understand how they can use ChatGPT and similar AI tools in their practices today. While ChatGPT won’t be replacing you anytime soon, it presents an incredible opportunity for attorneys to enhance and redefine where they add value and to become more efficient at some tasks, particularly the first draft of documents.