
State Says Judges and Staff May Use Generative AI

Michelle Hayes

Summary

  • The Delaware Supreme Court has issued an Interim Policy on the Use of Generative AI by Judicial Officers and Court Personnel.
  • Section leaders welcome this policy as an effort by the judiciary to promote accuracy and transparency.

To ensure the safe and appropriate use of generative artificial intelligence (AI), the Delaware Supreme Court has issued an Interim Policy on the Use of Generative AI by Judicial Officers and Court Personnel. ABA Litigation Section leaders welcome this policy as an effort by the judiciary to promote accuracy and transparency. Litigation Section members should be aware that courts may use generative AI to process their filings. Over time, courts are also expected to issue more specific policies concerning the use of generative AI.

Policy Allows Use of AI But Sets Limits

As the administrative head of the lower state courts, the state supreme court concluded that judicial officers and court personnel in the state may use AI in their work. But the court set limits on when and how the technology can be used. Under the interim policy, judicial officers and court personnel must “use caution” if they employ generative AI tools. They are responsible for the accuracy of all their work product when relying on the output of generative AI. Any AI products used for official duties must be approved by the court’s administrative arm.  

Further, the policy emphasizes that personnel should not use the approved generative AI tools without a working knowledge and understanding of them. To this end, the policy states that individuals should receive training on the technical capabilities and limitations of the approved tools before using them. Individuals also are not allowed to “delegate their decision-making function” to generative AI, the policy states. Nor may users input any non-public information into non-approved generative AI. However, the policy leaves open the possibility that non-public information may be input into approved AI tools. A related press release explained that the “policy advises against using non-approved generative AI programs—which could potentially make confidential information public . . . .”

As with other technologies, generative AI offers possible benefits, but “there are potential pitfalls and dangers associated with it and we believe this interim policy provides our judges and employees some needed and appropriate guardrails,” Delaware Supreme Court Justice Karen Valihura said in the press release. Valihura is co-chair of the Delaware Commission on Law and Technology, a judicial commission focused on addressing developing technology. The court added in its press release that the new policy is deliberately brief because generative AI technology is evolving quickly.

Policy’s Proactive Approach Helps Maintain Public Trust

Section leaders generally welcome the policy as an important step. This policy “expresses the desire of the court to be open to the use of AI with the correct safeguards,” remarks Beth L. Kaufman, New York, NY, Co-Chair of the Section’s Committee on the American Judicial System. “Judges and attorneys will need to become comfortable with how to use AI because it is likely to permeate society in coming years,” adds Kaufman. “Courts should be proactive—as Delaware has been—in order to manage this new technology and its use responsibly across the judicial system,” notes Jeanne M. Huey, Dallas, TX, Co-Chair of the Section’s Ethics & Professionalism Committee.  

Courts “also must be transparent about how it is being used in order to maintain public trust in the judicial system,” explains Huey. “The key takeaway for attorneys from this policy is that anything submitted in a court filing . . . could be processed by AI in ways they may not expect or be fully aware of, which could have implications for privacy, accuracy, and fairness in the legal process,” remarks Huey.

Currently, for their part, “attorneys must understand and follow the ethics opinions on AI use in the jurisdictions in which they practice,” Huey reminds Section members. “ABA Formal Ethics Opinion 512 gives a thorough and detailed analysis of the relevant ethics rules and how those rules affect what attorneys can and cannot do with generative AI in their practice,” notes Huey.

What Section Members Can Expect in the Future

“Every jurisdiction will soon need a policy for court use of AI as well as rules for use by attorneys and pro se parties as part of their local rules of practice or in their scheduling orders,” anticipates Huey. “Over time, more detailed policies will likely be needed to ensure that work product generated with the assistance of AI meets professional responsibility standards,” explains Kaufman. “This interim policy is a good start, but we should expect the courts to soon clarify which AI tools will be used and the specific policies and procedures for that use,” adds Huey.

“Attorneys and their clients need to know how the information in court filings will be handled in this new environment,” Huey notes. “While this policy does not give the specifics, we can anticipate that those will be set out in public policies and procedures that will follow,” she adds. “The judiciary may not have the budget for comprehensive AI infrastructure and in the future, the bench and bar may work together to further develop best practices to maintain accountability and transparency in the use of AI,” states Kaufman.
