
Law Practice Today

March 2023

The Good, the Bad, and the Sloppy of AI Chatbots

Nicholas W Allard


  • Businesspeople, professionals, and educators who learn how to make good use of new advanced networked computer tools, while managing risks and mitigating potential misuse, will have a competitive advantage over those who ignore them.
  • Bots are designed to perform tasks more quickly, reliably, and accurately than humans can, especially for jobs that are routine, repetitive, and vast.
  • Lawyers should be open to all the new inventions that can improve every aspect of their careers but recognize that there are no easy answers about how and when to adopt new technology.

Mark Twain is credited with saying that history does not repeat itself, it rhymes. While the pace of technological change and the scope of its disruptive impact, for both good and bad, are larger than ever, in many respects the issues seem very familiar. The ABA Section of Civil Rights and Social Justice's "AI and Economic Justice Project," for example, resonates with the long history of U.S. regulation of universal telephone and common carrier service, requirements for basic cable television service, efforts to minimize the so-called digital divide in the age of the internet, and, even more recently, efforts to mitigate the impact of flawed predictive algorithms that reinforce systemic racism. Fed with inaccurate and incomplete data, these algorithms have led to the misidentification of people of color by facial recognition systems, patient referrals to health care services based on anticipated spending habits, and criminal sanctions correlated with race.

To its credit, the "AI and Economic Justice Project" is an important, timely initiative to study the impact of artificial intelligence (AI) on low-income and marginalized groups. It seeks to guide the ABA's response to legal and regulatory issues posed by AI inventions that can have inequitable and unjust effects on less-advantaged people. Meanwhile, the U.S. Supreme Court, which Justice Elena Kagan recently described as "not the nine greatest experts on the internet," is wrestling with how to address four cases involving AI algorithms. Two involve whether social media companies can be legally responsible for harms caused by their content, and two involve whether efforts by social media companies to avert harms can be blocked or censored. Even though at least one prominent Congressman, Rep. Don Beyer (D-VA), is enrolled in an AI master's program at George Mason University, Congress has not yet begun to focus on, much less tackle, the complex policy issues raised by AI.

The truth is that AI inventions and humans have a lot to learn about each other. Both are imperfect. That irony explains the contradictory eruptions of interest, hype, and concern over chatbots like ChatGPT, a bot (short for robot) created and released by OpenAI only last November. ChatGPT uses technology known in geek speak as the Generative Pre-trained Transformer. Most simply, think of it as a digital tool for writing prose. It is roughly comparable to using tools like a calculator for doing math, an online search engine for research, or word processing to ease composition, formatting, and revision of text. At its current experimental stage, the ChatGPT writing bot predicts and suggests language, ideally in short responses to prompts submitted by users. It operates by predicting, upon request, what the user wants to write, not unlike the way spell check and word processing anticipate desired spellings and words; and we all have experienced how that can be both useful and, at times, annoying, even embarrassing.

A bot is a software application programmed to perform tasks independently and to simulate human activity by interacting with other computer systems and people without guidance. Bots are designed to perform tasks more quickly, reliably, and accurately than humans can, especially for jobs that are routine, repetitive, and vast. AI applications only appear to engage in human thought. They are not actually thinking. At least not yet.
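The prediction mechanism described above can be illustrated with a deliberately tiny sketch. A model like ChatGPT uses a neural network trained on billions of words; this toy version merely counts which word tends to follow which in a sample text, but the core idea is the same: suggest the most likely continuation of what has been written so far. The sample sentence and function names here are illustrative, not anything from OpenAI's actual system.

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count how often each word follows another,
# then suggest the most frequent follower -- autocomplete in miniature.
def train(text):
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1  # tally: nxt appeared after current
    return following

def predict(model, word):
    # Return the word most often seen after `word`, or None if unseen.
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

sample = "the law is reason the law is order the court is open"
model = train(sample)
print(predict(model, "law"))   # -> is
print(predict(model, "the"))   # -> law ("law" follows "the" twice, "court" once)
```

A real language model replaces the word-pair counts with learned statistical patterns over vast text, which is why its suggestions can be fluent yet confidently wrong, just as this toy predictor can only parrot what it has seen.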

Businesspeople, professionals, and educators who learn how to make good use of new advanced networked computer tools, while managing risks and mitigating potential misuse, will have a competitive advantage over those who ignore, postpone addressing, reject outright, or try to fight the inevitable widespread adoption of the relentlessly advancing technology.

Easier said than done. For example, historically, academics have been late adopters. This was so in ancient Greece, when Socrates objected to students relying on the written word, and it has been so continuously since then, up to our own time. Amazingly, for reasons Socrates would find familiar, many teachers and schools have already banned the use of ChatGPT (over concerns about cheating, interfering with teaching and learning, and fostering bad study habits). Baby boomers remember when the now-routine student use of calculators was suspect, and the digital natives of Gen Z still find many classrooms where teachers prohibit laptops.

Then too, throughout history humans have always feared and overreacted to their terrifying ungodly creations, whether fictional or real. Think, for instance, of the Jewish folklore about the Golem monster, which even scared the Nazis; the pitchfork- and torch-bearing mob storming the castle in film versions of Shelley's Frankenstein; Robert Louis Stevenson's chemically conjured, frightening criminal Mr. Hyde; and the eerily creepy automated voice of HAL in 2001: A Space Odyssey, blandly refusing to obey human direction: "I'm sorry, Dave, I'm afraid I can't do that."

Lawyers and law schools are notoriously late adopters, perhaps more so than any other learned profession, often for understandable and even commendable reasons. Those schooled in law are trained to weigh evidence before deciding. We prize the probative value of give-and-take argument, which also takes time to develop. We are comfortable with precedent and established practices and understand that change can be disruptive, have unintended and unforeseen consequences, and be unfair, especially for those who are unable to adapt or who relied on the status quo. Indeed, that is the very thrust of the ABA's laudable AI and Economic Justice Project.

In law schools and other academic settings, keeping up with a world where, paradoxically, the only constant is continuously accelerating change can be encumbered by outdated conventional wisdom, regulatory constraints, market realities, and the unending annual cycle of the academic calendar. That is, educators do not have the luxury of putting their schools in drydock to scrape off barnacles and retrofit the institution. Instead, innovation and experimentation can only be attempted while operationally underway, and involve navigating complex internal governance, legal requirements, and a fiercely competitive, highly transparent academic ecosystem that is acutely sensitive to the unexpected external shocks which frequently occur.

Even so, despite the challenges, we should push ourselves to consider prudent change and to pursue creative solutions to the age-old challenge of making the best, most creative use of new technology in business, the professions, and every level of education. Consider the classic worry that innovations like ChatGPT promote cheating by students. Why not seize the opportunity to focus campus conversations on the importance of personal responsibility for intellectual integrity, and on the understanding that academic fraud is both wrong and self-destructive, because cheaters deprive themselves of getting more out of their education? Besides, students and teachers will quickly learn that there are many ways to tell when writing comes from a bot, including using OpenAI's own "AI Text Classifier," released only a few short months after the launch of ChatGPT.

Then too, surely teachers can incorporate ChatGPT-type applications into lessons about how to write better than bots and how to critically analyze AI-generated information. History teaches us that concerns about being replaced by robots, or worse, terminated by Schwarzeneggerian mechanical overlords, are overblown. After all, there are unprogrammable elements of the human condition that algorithms cannot do justice to. Flesh-and-blood writers are, in the end, irreplaceable because of humanity's artful penchant for understanding, wisdom, judgment, purpose, abstraction, creativity, poetry, metaphor, unpredictability, love, friendship, compassion, empathy, joy, inspiration, dedication, sacrifice, surprise, error, sadness, grief, physical and mental pain, laziness, complacency, negligence, stubbornness, irrationality, deceit, meanness, bias, hatred, deviance, antisocial behavior, illness, death, and of course faith, hope, and charity. Our desires for relationships, liberty, privacy, and safety also make every one of us irreplaceable, because of, not despite, all our luminous foibles. We are more like Captain Kirk than Mr. Spock or IBM's Watson.

Technology-driven change, we hope for the better, is relentlessly inevitable. We should be open to all the exciting things new inventions can do to improve every aspect of our lives, but recognize that there are no easy answers about how and when to adopt new technology. ChatGPT, for example, was released while still under development. It is prone to repetition and irrelevant content, and can lead to unintended, readily discoverable plagiarism, given how it uses massive databases of published material. It is not yet particularly useful for many workplace communications involving teamwork, evaluations, and nuanced personal conversations. What should worry us is not whether it will improve, but whether we will learn how to use it well to serve our very human purposes. To be or not to be early adopters and improvers of artificial intelligence. That is a time-worn question.