At its best, AI (artificial intelligence) will assist judges and the judicial system in providing streamlined access to justice, free from human bias. As many of the articles in this issue attest, AI can guide court users through many legal issues without the need for a lawyer. So too, AI can provide judges with information based on objective factors. However, AI is a human creation and will not always perform perfectly.
To comply with their ethical responsibilities while using AI, or while evaluating its proper use by others, judges must first ensure that they understand the AI application involved. Michael Arkfeld’s piece outlines a daunting checklist for judges. Most judges will be tempted to rely on the lawyers’ knowledge of the underlying technology and its application in litigation. However, judges have both an ethical duty and a legal one to competently evaluate the facts and arguments presented.
So too, where “guidelines,” “tools,” and “standards” have been developed and presented to assist judges in decision-making free from historical biases, judges should be alert to any biases built into those AI tools themselves. Some of these inquiries can be made institutionally, as part of the legislative evaluation of the tools. At other times, the judge must assess, in applying the tool, whether justice is being served, while remaining ever mindful of the personal biases those tools are designed to counteract.
In the 2016 Wisconsin COMPAS due process case, the court noted that assessment tools continue to change and evolve: “The concerns we address today may very well be alleviated in the future. It is incumbent upon the criminal justice system to recognize that in the coming months and years, additional research data will become available. Different and better tools may be developed. As data change, our use of evidence-based tools will have to change as well. The justice system must keep up with the research and continuously assess the use of these tools.” State v. Loomis, 371 Wis. 2d 235, 242 (2016).
What all experts agree on is that artificial intelligence is not equivalent to human intelligence—and especially not to the intelligence that we expect from judges. The Lederer article in this issue explains the limitations and evolution of AI. The human aspect of intelligence that cannot be artificially constructed is “judgment.” So while artificial intelligence can assist judges in many ways, judges will always bear the responsibility to exercise that human trait and obligation: to provide justice through judgment.
A year ago, Chief Justice Roberts, speaking at his daughter’s high school graduation, articulated the challenge for all of us. Choosing to speak about the role of artificial intelligence in our future, the chief justice noted that AI is used to tell us what to read and watch and tells politicians what positions to support by feeding information on what their constituents want to hear.
Though he was not addressing judges, he could have been speaking to any judge when he advised the young graduates to undertake the most difficult task: sit and reflect. “Acquiring more information is less important than thinking about the information you have.” Beware the Robots, ABAJournal.com (June 8, 2018).
The ethical challenge for us all is to maintain that human strength while adapting to the enormous information revolution that artificial intelligence provides. As the Model Code of Judicial Conduct mandates: “A judge shall uphold and promote the independence, integrity and impartiality of the judiciary. . . .” ABA Model Code of Judicial Conduct, Canon 1. Wisdom will require the ability to use artificial intelligence to enhance integrity and impartiality, tempered by human judgment.