What ethical considerations should we as judges take into account when considering the use of generative AI to assist with our work? As a starting point, we can look to the ABA's Model Code of Judicial Conduct (MCJC) for guidance on the appropriate use of this technology. While the MCJC has not been adopted everywhere in the United States, the rules discussed below embody concepts generally found in all applicable rules governing judicial behavior, and multiple MCJC rules would appear to apply to a judge's use of generative AI:
- MCJC rule 1.2: “A judge shall act … in a manner that promotes public confidence in the independence, integrity, and impartiality of the judiciary ….”
- MCJC rule 2.2: “A judge shall uphold and apply the law and shall perform all duties of judicial office fairly and impartially.”
- MCJC rule 2.3(A): “A judge shall perform the duties of judicial office … without bias or prejudice.”
- MCJC rule 2.4(B): “A judge shall not permit family, social, political, financial, or other interests or relationships to influence the judge’s judicial conduct or judgment.” Comment [1] to this rule states, in part, that “[c]onfidence in the judiciary is eroded if judicial decision making is perceived to be subject to inappropriate outside influences.”
- MCJC rule 2.5(A): “A judge shall perform judicial and administrative duties, competently and diligently.” Comment [1] to this rule states, “Competence in the performance of judicial duties requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary to perform a judge’s responsibilities of judicial office.”
- MCJC rule 2.7: “A judge shall hear and decide matters assigned to the judge ….”
What guidance can judges discern from these rules on the use of generative AI? Fundamentally, under MCJC rule 2.5(A), judges have a duty to be competent, and this logically extends to technological competence, including an understanding of generative AI, especially given that the technology is becoming integral to the legal community. Understanding the fundamental workings of a specific generative AI application can help a judge avoid inadvertent bias. Like any technology, generative AI can operate in unanticipated ways and may rely on factors that are not appropriate or fair when used in a court matter. These problems can take the form of misapplied law or case precedent, fictionalized case citations, or narratives that otherwise mislead. Such bias would seem to be an outside influence that could call a judge’s impartiality into question and potentially violate MCJC rules 2.2, 2.3(A), or 2.4(B). Finally, using generative AI competently and without bias would uphold the public’s confidence in a judge’s use of it, in line with MCJC rule 1.2 and a judge’s duty to decide assigned matters under MCJC rule 2.7.
Going beyond the rules of the MCJC, other ethical problems can arise from using generative AI, including plagiarism and the disclosure of confidential information to a generative AI program. Perhaps the best way to frame the appropriate judicial use of generative AI is to consider it analogous to a law clerk. A law clerk can be very helpful in researching the law and facts of a case, and in drafting a decision or order, but, in the end, it is the duty of the judge to reach the ultimate conclusion on any legal issue in a case.