April 23, 2024 Technology Column

Ethical Rules to Consider When Using Generative Artificial Intelligence as a Judge

Hon. W. Kearse McGill, Los Angeles, CA

The introduction of generative artificial intelligence (AI) is probably the most impactful technological development in the legal profession since personal computers began appearing in law offices and the courts in the late 1970s and early 1980s.  Generative AI is quite different from earlier AI technologies, such as predictive typing or asking Siri or Alexa to find a lasagna recipe.  Generative AI can create new content, which would seem to make it a perfect tool for those who work in the law, a profession heavily dependent on gathering information and one in which writing is the primary product.

This attraction to generative AI’s perceived utility is especially understandable for judges, for whom time is often a luxury given busy court dockets and mandated deadlines for issuing decisions and orders.  However, generative AI is still in its infancy, and its use has already caused real problems, such as the “hallucination” of case citations in legal briefs that attorneys have submitted to courts; therefore, care must be taken when utilizing this new tool.  The National Center for State Courts (NCSC), through its recently created AI Rapid Response Team, has begun to venture into this area and to act as a clearinghouse on the issue.  As the NCSC points out, AI technologies, including generative AI, can improve court operations, but they cannot replace judges, and guardrails must be in place to ensure that such technologies are used ethically.  While a number of states are beginning to discuss the appropriate use of generative AI by judges, a few states (namely, West Virginia, Michigan, and New Jersey) have recently provided initial guidance.

What ethical considerations should we as judges take into account when looking to use generative AI to assist us with our work?  Initially, we can consider how the ABA’s Model Code of Judicial Conduct (MCJC) can guide us in the appropriate use of this technology.  While the MCJC has not been adopted everywhere in the United States, the rules discussed below contain concepts generally found in all rules governing judicial behavior, and multiple MCJC rules would appear to apply to a judge’s use of generative AI:

  • MCJC rule 1.2: “A judge shall act … in a manner that promotes public confidence in the independence, integrity, and impartiality of the judiciary ….” 
  • MCJC rule 2.2: “A judge shall uphold and apply the law and shall perform all duties of judicial office fairly and impartially.” 
  • MCJC rule 2.3(A): “A judge shall perform the duties of judicial office … without bias or prejudice.”
  • MCJC rule 2.4(B): “A judge shall not permit family, social, political, financial, or other interests or relationships to influence the judge’s judicial conduct or judgment.”  Comment [1] to this rule states, in part, that “Confidence in the judiciary is eroded if judicial decision making is perceived to be subject to inappropriate outside influences.”
  • MCJC rule 2.5(A): “A judge shall perform judicial and administrative duties, competently and diligently.”  Comment [1] to this rule states, “Competence in the performance of judicial duties requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary to perform a judge’s responsibilities of judicial office.”
  • MCJC rule 2.7: “A judge shall hear and decide matters assigned to the judge ….”

What can we discern from these rules to guide judges on the use of generative AI?  Fundamentally, under MCJC rule 2.5(A), judges have a duty to be competent, and this logically extends to technological competence, including an understanding of generative AI, especially given that it is becoming integral to the legal community.  Understanding the fundamental workings of a specific generative AI application can help a judge avoid inadvertent bias.  Like any technology, generative AI can operate in unanticipated ways and could incorporate factors that are not appropriate or fair when used in a court matter.  These problems can take the form of misapplied law or case precedent, fictitious case citations, or otherwise misleading narratives.  Such bias would seem to be an outside influence that could call a judge’s impartiality into question and potentially violate MCJC rules 2.2, 2.3(A), or 2.4(B).  Finally, using generative AI competently and avoiding bias would uphold the public’s confidence in a judge’s use of it, in line with MCJC rule 1.2 and a judge’s duty to decide assigned matters under MCJC rule 2.7.

Going beyond the rules of the MCJC, other ethical problems can also arise from using generative AI, including plagiarism and the disclosure of confidential information to a generative AI program.  Perhaps the best way to view the appropriate use of generative AI by a judge is to consider it analogous to a law clerk.  A law clerk can be very helpful in researching the law and facts of a case, and also in drafting a decision or order, but, in the end, it is the duty of the judge to reach the ultimate conclusion on any legal issue in a case.

