Large language models such as ChatGPT are all the rage these days. Commentators, lawyers, legal professionals and media outlets, including this podcast, have spent a lot of time examining this game-changing technology.
This isn’t the first time that a promising piece of legal technology has upended the legal industry. When technology-assisted review first started gaining traction in e-discovery in the 2010s, many of the same superlatives now applied to ChatGPT were used to describe this groundbreaking new process, which purported to review documents faster and more accurately than humans. Lawyers would get hours and hours of time back, and clients would save tons of money.
But then a funny thing happened. Lawyers were reluctant to fully embrace it, citing concerns with the technology or the possibility that a court might punish them for using a new tool that hadn’t been widely accepted by the legal industry. Even today, many lawyers and law firms still rely on traditional methods of conducting e-discovery—armies of contract attorneys sifting through documents one at a time.
By contrast, large language models have already been embraced by lawyers and legal professionals more wholeheartedly than technology-assisted review was. However, there have also been plenty of hiccups and problems with the technology—between false case citations and made-up information, it’s clear that this technology still has a ways to go.
In this episode of the Legal Rebels Podcast, e-discovery pioneer John Tredennick talks to the ABA Journal’s Victor Li about what it was like when technology-assisted review first came out, how its reception compares with the one ChatGPT got, and how large language models are affecting keyword searches.