The artificial intelligence–powered software ChatGPT has amassed users at a faster rate than any other online platform. After launching in late 2022, it reached 100 million users within two months. Compare that to TikTok, which took nine months to reach 100 million. Instagram took two and a half years to reach the 100 million–user mark, and Facebook took nearly four years to do so. ChatGPT is a software program that responds to written requests as if it were a supersmart human being. It can produce a written essay on a seemingly infinite range of topics in the style you direct and respond to specific inquiries with computer speed. It’s what techies call a chatbot.
When one user asked ChatGPT to explain what it is, it described itself as “an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. ChatGPT uses deep learning techniques to generate human-like responses to text inputs in a conversational manner.” (Note: “AI” is today’s acronym for “Artificial Intelligence.”)
A complaint often heard about ChatGPT is that it will enable students to cheat when writing their school essays. Closer to home, a judge in Colombia used ChatGPT to help draft his decision in a child’s medical rights case.

Although I am writing this article about my experience with OpenAI’s ChatGPT, other chatbots perform similar functions. Included among these are Bing’s AI chatbot and Google’s Bard. A general criticism of AI chatbots is that the software occasionally gives responses that reinforce, amplify, and perpetuate existing biases and prejudices. In fact, there are cautionary notes on ChatGPT’s webpage that the software may “occasionally produce harmful information or biased content.” These cautionary comments should be considered when using ChatGPT or one of its competitors.
News reports and press releases say that ChatGPT is programmed to resist commenting on sensitive topics. I also learned from other reports and ChatGPT’s FAQ page that this unique artificial intelligence software can fabricate authoritative-sounding answers with inaccurate or incomplete information. To emphasize this cautionary warning, ChatGPT states on its FAQ page that it “will occasionally make up facts” or “hallucinate outputs” (whatever that means). The FAQ page also recommends that users report whether responses from ChatGPT are accurate and requests feedback via the “Thumbs Down” icon if a response is unsatisfactory or inaccurate. This crowdsourced evaluation of ChatGPT is intended to assist the developer’s tweaks of the product and the chatbot’s own machine learning capabilities.
The overwhelming volume of news articles about ChatGPT finally got my attention. I had to satisfy my curiosity by trying it out. I had no thought of writing this article until . . . .
I was working on a speech for a non-law-related audience when I tried ChatGPT. First, I asked it to tell me a joke about the Chinese balloon shot down over the United States. The essence of the response I received from ChatGPT was that it did not discuss politically sensitive topics. Okay, I can accept that. Next, I asked it to give me some ideas for my chosen speech topic. The ideas weren’t bad, but they were so banal that they convinced me to choose another topic. I then had another idea for a question to ask ChatGPT, which is why I am writing this column. I have often asked audiences at legal technology conferences whether there should be a requirement for judges to keep up with and understand changes in technology. I did not ask that question of ChatGPT because I assumed an artificial intelligence–enhanced computer program would respond, “Of course, judges should be required to keep up with and understand changes in technology.” I will leave it to readers of this column to ask ChatGPT or any AI chatbot competitor about its opinion on that topic. Instead, I asked the chatbot a more general question to test its research capability and knowledge. My question and ChatGPT’s response are noted below.
Dixon’s Question to ChatGPT:
Is there a provision in the Model Code of Judicial Conduct that requires judges to keep up with and understand changes in technology?
ChatGPT’s Response to Dixon:
The Model Code of Judicial Conduct is a set of ethical guidelines for judges, which was created by the American Bar Association (ABA). While the Model Code does not contain a specific provision that requires judges to keep up with and understand changes in technology, it does include general language that encourages judges to stay informed about changes in society and to perform their duties with competence and diligence.
Specifically, Rule 2.3 of the Model Code states that “A judge shall perform the duties of judicial office impartially, competently, and diligently.” Comment  to that rule explains that judges must “remain informed about the law, court rules, procedures, and management techniques necessary to perform the duties of judicial office competently.” While technology is not explicitly mentioned in this comment, it is generally understood to be an important part of modern legal practice, and judges are expected to be able to use technology to manage their caseloads and communicate with parties and attorneys.
Furthermore, in 2019, the ABA updated the Model Rule 1.1, which concerns a lawyer’s duty of competence. The amendment made it clear that a lawyer has a duty to keep up with the changes in technology that impact the practice of law, and that failure to do so may constitute a violation of ethical rules. Although this rule applies to lawyers, not judges, it reflects a growing recognition within the legal profession of the importance of understanding technology in legal practice.
In summary, while there is no specific provision in the Model Code of Judicial Conduct that requires judges to keep up with and understand changes in technology, judges are expected to perform their duties competently and diligently, which includes staying informed about changes in society and the law, including changes related to technology.
I was pleasantly surprised that ChatGPT referenced the American Bar Association because my question did not identify the ABA. When I shared my question and the response with Minnesota Court of Appeals Judge Peter M. Reyes Jr., chair of the ABA Judicial Division’s Ethics and Professionalism Committee, he quickly pointed out that the chatbot incorrectly referenced Model Rule 2.3 because the language closest to the quoted passage, “A judge shall perform the duties of judicial office impartially, competently, and diligently,” is found in Rule 2.5(A), not Rule 2.3. Rule 2.5(A) actually states, “A judge shall perform judicial and administrative duties, competently and diligently.” Additionally, the quoted language that the chatbot attributed to Comment  does not appear in the Comments to either Rule 2.3 or 2.5. Following that heads-up from Judge Reyes, I noted that the chatbot’s reference to Model Rule 1.1 is misleading. The amendment making it clear that a lawyer has a duty to keep up with the changes in technology that impact the practice of law is in Comment  to Rule 1.1 of the ABA Model Rules of Professional Conduct, not the Model Code of Judicial Conduct (a minor detail, I guess). Still, I was unsure whether I should classify these missed details as “making up facts” or “hallucinating an output.”
I was intrigued by the quotation that ChatGPT attributed to Comment , that judges must “remain informed about the law, court rules, procedures, and management techniques necessary to perform the duties of judicial office competently.” The quotation has an authoritative sound. After a diligent search, I can say with confidence that the quoted passage does not appear in the Model Code of Judicial Conduct or the Model Rules of Professional Conduct.
With the assistance of a law clerk and a law librarian, I exhausted the Google, Bing, and DuckDuckGo search engines, LexisNexis, Westlaw, and other resources, searching cases, secondary materials, statutes, codes, newspapers, magazines, and administrative materials. I was unable to find the source for that quotation. Out of frustration, I asked ChatGPT several times, using slightly different wording, to identify the source of the quote. I received an error message on each occasion. I even used plagiarism software to find the source of the quote. In every instance, I came up empty, with no identified source for the quote. I spent more time trying to track down the source of that quote than writing this article.
Various judicial codes contain similar language requiring a judge to act impartially and to perform the duties of judicial office competently; however, nothing I located comes close to the quote. I do not know if ChatGPT aggregated the so-called quote from various judicial codes, fabricated it, made up facts, or hallucinated its response. I have not found the quoted language in any public online database. I welcome assistance from any reader inclined to continue the search for the quoted language attributed to Comment .
Finally, ChatGPT’s response that judges have an obligation to keep up with and understand changes in technology could have been made stronger by citing additional rules of the Model Code of Judicial Conduct, but that omission makes my point. The chatbot, ChatGPT, notwithstanding its “artificial intelligence” label, is merely a tool to be used wisely and carefully. Appropriate human discretion, fact-checking, and cite-checking are still required.
Society’s and the legal profession’s use of artificial intelligence is not new. As I type this article, my word processing program is offering spelling corrections and suggesting the next word I should type. That is artificial intelligence software at work. When I use Face ID or Touch ID to unlock my phone, that’s artificial intelligence at work. When we use GPS in our travels to a new destination or to help avoid traffic congestion or find a faster route, that’s artificial intelligence at work. When we chat online with the Help Desk or customer service, we are usually not communicating with a real person but with an artificial intelligence chatbot. Or how about when you are doing routine Internet surfing and are suddenly presented with digital advertisements for products or services similar to what you have recently researched? That’s also artificial intelligence at work. How about when you ask Siri to call Mom or Alexa to tell you the weather forecast for Washington, DC? That’s also artificial intelligence at work. And the Internet search engines and legal research tools that members of the legal profession regularly use, and that I used to search for the phantom quote—those are also artificial intelligence at work.
Artificial intelligence chatbots can produce authoritative-sounding research or convincing essays with seeming ease. According to one commentator, chatbots are notorious for issuing “coherent nonsense”—language that sounds authoritative but is actually unreliable. I agree! Users must exercise the same caution with chatbot responses as when doing Internet research, seeking recommendations on social media, or reading a breaking news post from an unfamiliar person or news outlet. Don’t trust; verify before you pass along the output.
After the draft of this article was submitted for publication but before this magazine went to print, a lawyer asked ChatGPT to generate a list of legal scholars accused of sexual harassment. ChatGPT’s response included a law professor/cable news commentator, saying he made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post. According to The Washington Post, however, no such article existed; there had never been a class trip to Alaska; and the law professor/cable news commentator said he has never been accused of harassing a student. As I said above, don’t blindly trust a chatbot’s response. Verify the information before you pass it along!