The next time you visit a website, click the customer service tab and enter a live chat with an assistant tasked with answering your questions and helping with your issues, chances are you're not actually talking to a human.
Programmed to communicate as if they were living, breathing people, AI chatbots work by asking you a series of questions and presenting your available options. They've become commonplace in the corporate world, allowing companies to provide 24/7 service without asking employees to work graveyard shifts or relying on overseas call centers.
Lawyers, law firms and courts have gotten into the act as well, using chatbots to answer legal questions, help lawyers with client intake, resolve disputes between litigants, and even help pro se parties represent themselves in court. As a result, chatbots have emerged as a tool with enormous potential to help bridge the access-to-justice gap.
But could they also have enormous potential for harm? In June, a software engineer at Google made headlines when he claimed that the AI chatbot he was working with had become sentient.
Cue immediate mass panic, as people speculated whether this was the start of the robot uprising foreseen in movies such as the Terminator series or 2001: A Space Odyssey. Would we soon have no choice but to welcome our new robot overlords? Google pushed back and fired the engineer, saying his claims were wholly unfounded. But that's just what you'd expect from a company that may or may not have stumbled into creating the real-life version of Skynet, right?
Tom Martin, founder and CEO of LawDroid, a bot development and consulting company for the legal industry, and a 2022 ABA Journal Legal Rebel, helps lawyers and courts design chatbots. He joined the ABA Journal's Victor Li to dispel some myths about chatbots and explain what they can and can't do, as well as where the technology might be heading, especially in the legal field.