Stop me if you’ve heard this one before. A chatbot walks into a bar. . . .
You cannot spend more than five minutes in certain corners of the internet without hearing about ChatGPT, the conversational chatbot from OpenAI. ChatGPT took the technology world by storm late last year, reaching one million users in a matter of days and arguably notching the fastest meaningful adoption of any new technology on record. Artificial intelligence (AI) veterans (including me) are excited by the rapid progress and possibilities of this new interactive technology. Others (also including me) are concerned about its legal, ethical, and social effects. What’s a human race to do?
My initial reaction was to ask ChatGPT to write this column. After all, writing blog posts and other social media copy is among the most frequently referenced use cases. Fear not, though. This column is in fact written by yours truly, a human being. My start down the path with ChatGPT as author led to several thorny problems that required a human touch. Thus, I intervened and instead opted to write about the experience.
Before talking about the column-writing experience and its implications, a primer on ChatGPT and “generative AI,” the category of technology to which ChatGPT belongs, is in order.
Primer on Generative AI
Most machine-learning tools in popular use today are either explanatory or predictive. Explanatory AI summarizes massive amounts of data or identifies trends in it, often in real time, tasks that would be impractical or prohibitively expensive for humans. Organizing search results and automatically tagging your friends in social media are just a few such use cases. Predictive AI, as you would guess, makes suggestions based on patterns the algorithm detects in historical data. Online product recommendations and better weather forecasting fit into this broad category.
AI tools have been used in business quite successfully since the 1990s, though adoption was limited by the high cost of developing and deploying them. In the past decade, those costs have dropped dramatically, and “deep” neural networks that simulate how the human brain learns have become increasingly powerful.
Consider that, as a graduate student in 2002, I developed an algorithm that predicted the three-dimensional shape of a single protein in four hours, considered state of the art at the time. Last year, DeepMind’s AlphaFold predicted the shape of almost every known protein (around 200 million proteins from 1 million species) in 18 months, which works out to roughly one protein every quarter second, a more than 50,000-fold speedup in just two decades.
OpenAI, the creator of ChatGPT, has been investing heavily in building large language models (LLMs), which combine the power of explanatory AI to understand an immense body of text with the power of predictive AI to generate novel responses. You use LLM technology every day through the type-ahead suggestions in most email and texting applications and in your conversations with Alexa and Siri. As the algorithms have improved and the models have grown, suggestions have gone from a single word to a phrase, then to a sentence, and now to entire paragraphs.
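That progression from single words to whole paragraphs all rests on the same underlying task: predicting the most likely next word given what came before. As a purely illustrative sketch (a toy frequency counter over a made-up corpus, nothing like the neural networks behind GPT), the idea can be shown in a few lines of Python:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then suggest the most frequent follower. Real LLMs learn
# these patterns with neural networks over vast corpora, but the
# core task is the same: predict the next word from context.
corpus = (
    "the court granted the motion "
    "the court denied the appeal "
    "the court granted the appeal"
).split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word):
    # Most common word seen after `word`; None if never seen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("court"))  # "granted" (seen twice, vs. "denied" once)
print(suggest("the"))    # "court" (the most frequent follower of "the")
```

Chain such predictions together and you get generated text; scale the statistics up by many orders of magnitude and you get something like GPT.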
To put the true scale of this into perspective, GPT-3.5, the OpenAI language model released in 2022, has 175 billion parameters (the adjustable weights in the neural network) and was trained on over half a trillion words from the internet, books, and Wikipedia. It is over one hundred times larger than its predecessor, built just a few years before. The newest iteration, GPT-4, was recently released, and while figures have not been made public, it is believed to be at least an order of magnitude more powerful still. The pace of change in this space is fast and furious.
What is ChatGPT, then? It is a conversational chatbot built on GPT-3.5 that can take directions in natural language and produce human-quality responses—from blog posts to company strategy, from software code to lyrics and poetry—on a wide range of topics. The hype, for once, is deserved. You truly need to see it to believe it.
The Strengths and Limitations of ChatGPT
ChatGPT functions much like any other chatbot, just with a far more expansive and nuanced repertoire. You type a message, and it replies based on what you have written. One key advantage is that it “threads” conversations, so you can ask clarifying questions, further direct its responses, or sustain a dialogue on a topic. You can also ask it to take on a particular persona, which can help guide the substance and style of the response, well beyond the obligatory requests to “talk like a pirate.”
I started by asking ChatGPT to write a balanced article for this journal. As in any good human collaboration, I tried to be clear:
Acting as a lawyer and law professor, write a professional article of 1700 words for a litigation journal that gives a balanced view of the legal implications and risks of using generative AI such as ChatGPT. Include considerations of intellectual property ownership, cybersecurity and data privacy risks, permission of use, data ownership, giving credit, and citing sources. Write in approachable and conversational style. Include several suggested titles and bylines. Include relevant precedent cases or examples from similar digital technologies where possible.
The title it generated was pretty spot on (so much so that I used it as the title of this column), and while it did not understand what I meant by “byline,” its response there was amusingly cheeky: “Byline: Jane Smith, Esq. and John Doe, JD, LLM.”
The response included a full-fledged essay, though the language was a bit stilted and repetitive, and not all the topics were explored in detail. Still, it was a passable attempt, likely superior to what a novice human writer would produce. Here are the first two paragraphs:
As the legal profession continues to evolve and embrace new technologies, generative AI such as ChatGPT has emerged as a powerful tool for lawyers in the litigation process. However, the use of ChatGPT also raises a number of legal considerations and risks that must be carefully evaluated. In this article, we will provide a balanced view of the legal implications and risks of using generative AI in litigation, with a focus on ChatGPT.
One of the first considerations when using generative AI in litigation is intellectual property ownership. When an associate or attorney uses ChatGPT to assist in their work, who owns the resulting work product? Is it the attorney or the law firm? Or does the ownership belong to the creators of ChatGPT or the company that owns the AI? This is an important question to consider, as the ownership of the work product will determine who has the right to use, distribute, and profit from it. In this case, it is likely that the legal team using ChatGPT would own the work product and will be able to use it as they wish.
Pretty impressive! ChatGPT did not plagiarize this. Rather, it synthesized concepts from a wide variety of sources, adapted them to the needs of my prompt, and created novel text. Not bad for 30 seconds of effort.
Things started to unravel, though, when I probed further about intellectual property, one of my areas of concern. In the initial dialogue, ChatGPT claimed that those using it would likely own the work product. However, misreading that answer, I asked it to explain why ChatGPT (not the legal team) would own the work product. Rather than correcting my premise, it dutifully supplied an erroneous justification.
Aside from misinformation, generative AI raises several other concerns. Its output is nondeterministic: ask the same question twice and you will get different variations each time. The results are not reproducible and not citable in the traditional sense. It does not give you its sources, and in fact it cannot, given how LLMs work. Like a human, it draws on the accumulation of a lifetime of learning; it is not a traditional, deterministic search engine.
These concerns are sufficient to give pause to those engaging it for commercial use. The concerns are even more fraught in a legal setting. Consider ChatGPT’s own response to this topic:
Law firms must ensure that they are in compliance with data privacy laws and regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Permission of use is also an important legal consideration when using generative AI in litigation. While generative AI such as ChatGPT is widely available and accessible, it is important to ensure that the use of the AI is in compliance with any terms of service or license agreements. Additionally, law firms should consider obtaining permission from clients before using ChatGPT to assist in their work, as the use of the AI may have implications for attorney-client privilege and other legal protections.
People have also raised concerns about the “softening” of future generations who rely too heavily on such tools, losing key skills such as critical thinking and research. People have cried foul about technology’s “dumbing down” effect since the days of calculators and spell-check, but the proverbial jury is still out on the impact of today’s more sophisticated tools.
Proceed with Caution
So where does this leave us? Should firms embrace ChatGPT? Discourage it? Block it outright? Ultimately, that is a question for leadership and their counsel. There are enough warning signs to at least proceed with caution.
Still, the future of this technology is inevitable, if not clear. It will become more sophisticated and, judging by the last few years, quickly so. Legal technology companies are watching these developments closely and building their own capabilities alongside ChatGPT. Given the amount of investment flowing into the space, progress should continue to accelerate.
For now, ChatGPT works well as a creative collaborator to crystallize one’s thinking and generate new ideas, so long as it is not overly relied on or used blindly for legal or client work. In today’s post-COVID hybrid workplace, perhaps having a digital “sparring partner” that is always there and willing to talk to us is not such a bad thing after all.
Oh, and as for that joke, I will give ChatGPT the final word on the matter:
A chatbot walks into a bar and orders a drink. The bartender looks at the chatbot and says, “I’m sorry, but we don’t serve drinks to robots.” The chatbot responds, “That’s okay, I’ll just stick to answering legal questions. It’s what I do best anyway.”
Funny? Not really. Scary? Absolutely.