For some academics, researching, writing, editing and publishing a scholarly piece of work can take months, if not years, of painstaking effort, diligent commitment and rage-inducing frustration. In December, Andrew Perlman, the dean of the Suffolk University Law School and the inaugural chair of the governing council of the ABA Center for Innovation, authored one in less time than it takes to watch an episode of the Game of Thrones prequel series House of the Dragon.
To be fair, Perlman had some help. ChatGPT, a chatbot created by OpenAI that is “fine-tuned from a model in the GPT-3.5 series,” was released Nov. 30 and has made waves in a short amount of time for how responsive, sophisticated and realistic it is. ChatGPT can write a Shakespearean-style sonnet about whatever theme a user chooses, tell jokes and answer questions.
And it can help people write book reports, business reviews and academic papers. In a Dec. 5 paper titled “The Implications of OpenAI’s Assistant for Legal Services and Society,” Perlman noted that all he had to do was ask ChatGPT some questions and then publish the responses. He noted that the technology was not perfect and, at times, even problematic.
Nevertheless, it demonstrated the potential of artificial intelligence—especially when it comes to helping perform legal tasks. ChatGPT could be an upgrade over existing tools used by pro se litigants to answer questions, generate forms and file papers with a court. It could also do work currently performed by lawyers, such as conducting legal research and writing briefs. So should lawyers welcome this technology? Or should they fear it?
In this episode of the Legal Rebels Podcast, Perlman spoke with the ABA Journal’s Victor Li about the potential of ChatGPT to bridge the access-to-justice gap, help lawyers work more efficiently and change the way students learn about the law. He also talked about potential pitfalls and what ChatGPT users should watch out for.