Legal and Ethical Considerations for the Use of GenAI
The states that have created a framework to allow or encourage the use of GenAI commonly weigh several of the same factors.
Accountability for the final work product. The end user of GenAI must ensure accuracy and compliance with legal and ethical obligations. Judges and users remain ultimately responsible for their decisions, even if they have used GenAI to assist in reaching them. Researchers have found that AI chatbots powered by large language models (LLMs) like ChatGPT can generate falsehoods in their output, sometimes referred to as GenAI hallucinations, and these hallucinations can happen a third of the time or more. AI-generated output can also contain factual errors or self-contradictory statements. And because LLMs memorize the data on which they were trained, the risk of plagiarism is high as well.
Lawyers are responsible for meritorious claims and candor to the tribunal: Model Rules 3.1, 3.3 and 8.4(c) require them to refrain from bringing frivolous claims, to correct any false statement made to a tribunal and to avoid misrepresentations. This puts an additional onus on attorneys to review the analysis, citations and statements of law and fact before presenting AI-generated content to the court.
Using GenAI also creates additional supervisory responsibilities under Model Rules 5.1 and 5.3. Judges and attorneys are responsible for the actions of their subordinates, and managing lawyers must establish clear policies on GenAI use, ensure training and become versed in the ethical and practical uses of GenAI. This lofty goal adds another layer of responsibility, and perhaps another risk of malpractice.
Disclosure of use. Must the attorney or judge using GenAI disclose that they have done so? Illinois and Florida say no. A judge in Texas says yes. The Fifth Circuit proposed an amendment that would require filers to check a box stating whether they had used GenAI, but public comments on the proposal were mostly negative. These discrepancies will likely make their way into local rules if the state courts don't reach a comprehensive decision.
Lawyers must also communicate with clients: Model Rule 1.4(a)(2) requires reasonable consultation with the client about how the client's objectives are to be accomplished. As always, whether and when a lawyer must disclose GenAI use to a client depends. Certainly, if asked, we must tell the truth. Lawyers must also consult with clients when the use of a GenAI tool is relevant to the basis or reasonableness of a lawyer's fee, or when the output will influence a significant decision in the representation, such as evaluating potential litigation outcomes.
Understanding/informed use. Courts ask users not to deploy GenAI without a working knowledge of general AI capabilities and training in the technical capabilities and limitations of the specific tool. This might be the most Herculean task for attorneys, but it goes to the basic ethical rule of competence (Model Rule 1.1). Lawyers don't have to be experts in GenAI, but they must know what's available, have a reasonable understanding of the specific technology and make an informed decision, using their professional judgment, about whether to use these tools. Learn the tools and stay up to date, because GenAI is evolving quickly. Be aware that if a tool was trained on limited, outdated or biased content, its output may be unreliable, incomplete or discriminatory. And because GenAI can't understand meaning or evaluate context, legal professionals can't rely on it exclusively for tasks that require professional judgment.
Learning how to talk to GenAI will be another language for attorneys and judges to master. Prompts, the instructions entered into an AI tool to produce a result, can vary widely in quality, which means the attorneys most knowledgeable about crafting a prompt will likely receive the best results. Try these two prompts in your favorite GenAI tool: "Create a list of commands that an attorney might use to give instructions to artificial intelligence" versus "Create a list of commands that a family law attorney might use to give instructions to artificial intelligence." The results are worlds apart, as the sketch below illustrates.
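For readers who want to experiment beyond a chat window, here is a minimal sketch of sending those two prompts programmatically, using OpenAI's Python client as one example; the model name is an illustrative choice, and the sketch assumes an API key is configured in the environment.

```python
# Minimal sketch: sending two differently scoped prompts to a GenAI tool.
# Uses the OpenAI Python client as one example; the model name is an
# illustrative choice, and OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Create a list of commands that an attorney might use "
    "to give instructions to artificial intelligence",
    "Create a list of commands that a family law attorney might use "
    "to give instructions to artificial intelligence",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```

The only difference between the two calls is a single qualifier in the prompt text, yet the outputs diverge substantially: specificity in the prompt drives specificity in the result.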
Confidentiality. The output of GenAI is only as good as the information put into it. But confidentiality requires that we protect the privacy of information we are privileged to access. That GenAI is a nameless, faceless program does not permit us to compromise sensitive information: confidential communications, personally identifiable information, protected health information, justice and public safety data, security-related information, or information that conflicts with judicial conduct standards or erodes public trust. Unless you are certain that the GenAI you are using is a closed system, meaning it analyzes only the information given to it by your organization and doesn't use your organization's information in its analyses for other entities, it's impossible to maintain confidentiality while using specific data. Even in a closed system, self-learning tools increase the risk that information shared among court staff or other firm lawyers might be exposed in inappropriate ways. At a minimum, the best practice is to obtain a client's informed consent before inputting information relating to the representation into a GenAI tool. The lawyer must explain the risks and benefits in plain language, beyond a general boilerplate provision in an engagement letter.
How GenAI Is Being Used in Courts
Legal research is perhaps the most ubiquitous use of GenAI, but it has also been used to create form divorces, draft pleadings and discovery, create exhibits and locate errors in documents. Some courts are exploring the use of GenAI to screen financial reports in guardianship matters for potential red flags; a sketch of what such a screen might look like follows.
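As a thought experiment only, here is a minimal rules-based sketch of how a guardianship accounting might be screened for red flags before human review. This is not any court's actual system; the field names and thresholds are invented for illustration, and a real screen would be far more sophisticated.

```python
# Hypothetical sketch: rules-based screening of a guardianship
# accounting for red flags. All field names and thresholds are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Accounting:
    opening_balance: float
    closing_balance: float
    total_disbursements: float
    cash_withdrawals: float

def red_flags(a: Accounting) -> list[str]:
    flags = []
    # Disbursements exceeding available funds suggest an error or worse.
    if a.total_disbursements > a.opening_balance:
        flags.append("disbursements exceed opening balance")
    # Large cash withdrawals are a classic warning sign in guardianships.
    if a.cash_withdrawals > 0.25 * a.opening_balance:
        flags.append("cash withdrawals exceed 25% of opening balance")
    # Opening balance minus disbursements should reconcile to closing.
    if abs(a.opening_balance - a.total_disbursements - a.closing_balance) > 0.01:
        flags.append("balances do not reconcile")
    return flags

# Example: this accounting trips the withdrawal and reconciliation checks.
print(red_flags(Accounting(50_000, 10_000, 41_000, 15_000)))
```

Whatever the mechanism, simple rules or GenAI, the output should flag reports for a human reviewer rather than decide anything on its own.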
One of the more novel ways GenAI is being used in the courts is to calculate how long a person will spend in jail for a crime. Proponents argue that it reduces bias in the sentencing process, can quickly and easily analyze available data and statistics on recidivism, and can reach a neutral sentencing decision. Detractors argue that the neutrality of GenAI results is only as good as the data the tool uses, and that a human decision will result in more equitable sentencing. In criminal cases, where a large amount of data is available and tracked, a large volume of cases moves through the courts, a speedy result is necessary and many participants lack time or resources, AI can certainly improve access to justice if used judiciously.
Ultimately, the overriding objective of GenAI guidelines is to get courts and lawyers to exercise their professional judgment. Understand the technology and use it with caution. Think about how the existing ethical rules apply to it. Explore it or avoid it at your own risk. As with most new technology, the legal profession will have to find a way to incorporate it and adjust.