Can AI Be Used Ethically in Law Practice?
According to Bloomberg Law’s State of Practice survey, most attorneys think so. Understanding emerging technology is critical for modern lawyers, especially when it affects the practice of law. The duty of technology competence set forth by the ABA in Comment 8 to Model Rule of Professional Conduct 1.1 has been widely adopted since its introduction in 2012. And most attorneys recognize that gen AI use is becoming commonplace. Though the technology may be developing faster than many of us can comfortably track, the legal profession has an opportunity to influence the frameworks, protocols and guardrails now taking shape, for the better. Ultimately, it is our application of these tools that will determine their effect on society, the world and our legal systems.
We need to stay curious, keep abreast of new developments, issue-spot proactively and work to gain a true understanding of the available tools: how they work, what’s at stake and how to effectively counsel our clients in this area. While it may be tempting to jump right in and try all the new tools at once, it behooves the cautious attorney to pause and undertake a thorough risk assessment. Early exploration is key to early understanding, but it is also important to proceed with caution and to ensure that exploration and testing do not put client interests at risk. Unanticipated consequences are likely to emerge with any new tool or process.
AI Is Not a Replacement for Human Judgment or Legal Expertise.
Using an AI tool does not relieve you of your ethical responsibilities, or of accountability for the work product you ultimately deliver. It’s necessary to thoroughly review, test and edit every AI output before releasing it into the wider world, particularly when it comes to filings and representations to the court. So far, one of the most highly publicized risks of using gen AI in legal practice has been the small but steadily growing number of attorneys who have come under fire for submitting false case citations invented by large language models. Interestingly, these fake citations tend to be formatted correctly and even reference real members of the judiciary. The problem is that the precedents don’t exist. In the interest of generating an output the attorney will find favorable, the tools sometimes invent convincing cases that align with the attorney’s position and arguments. This phenomenon is called hallucination, and while we can expect model accuracy to improve with time and reinforcement training, for now it is important to be aware that these tools can generate highly persuasive hallucinations and have a track record of producing inaccurate citations.
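One practical safeguard is to make sure every citation in an AI-assisted draft is pulled out for human verification before anything is filed. Below is a minimal sketch of that idea; the regular expression, which covers only a few common U.S. reporter formats, and the citation_checklist helper are hypothetical illustrations rather than a production tool, and no script replaces actually reading the cases.

```python
import re

# Rough pattern for a few common U.S. reporter citation formats.
# Illustrative only: real citation formats are far more varied, and a
# production workflow would use a proper citation parser.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                   # volume number
    r"(?:U\.S\.|S\.\s?Ct\.|F\.\s?(?:2d|3d|4th)|F\.\s?Supp\.(?:\s?[23]d)?)"
    r"\s+\d{1,4}\b"                                   # first page
)

def citation_checklist(draft: str) -> list[str]:
    """Extract every string that looks like a reporter citation so a
    human can verify each one against a trusted research service.
    A match proves nothing about validity; it only ensures that no
    citation slips through unreviewed."""
    return sorted(set(CITATION_PATTERN.findall(draft)))

# Fabricated draft text for demonstration purposes only.
draft = "See Smith v. Jones, 123 F.3d 456 (9th Cir. 1997); accord 598 U.S. 69."
for cite in citation_checklist(draft):
    print("VERIFY MANUALLY:", cite)
```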
An attorney’s duty to understand applicable legal precedents, verify that cases remain good law and ensure the accuracy of citations must not be ignored. Federal Rule of Civil Procedure 11(b)(2) outlines the duties that arise when representations are made to the court: such representations must be warranted either by existing law or by a nonfrivolous argument to extend, modify or reverse existing law. If you do not have the time to verify legal authority, or the expertise to know whether the content you plan to present to the court is accurate and ready for thorough assessment, then using a generative AI tool to assist in drafting is likely to prove highly embarrassing and detrimental to both your client’s interests and your practice.
Attorneys Have a Duty to Be Honest About Their Use of AI.
We must avoid misleading clients, the court and others.
- ABA Model Rule 1.4 outlines an attorney’s duty to reasonably consult with clients about how their objectives are to be accomplished,
- ABA Model Rule 3.3 prohibits attorneys from making false statements to the court, and
- ABA Model Rule 4.1 details an attorney’s duty of honesty in transactions with persons other than clients.
Notably, there have been several instances of attorneys being less than truthful when questioned about whether AI was used. Blaming AI errors on little-known precedents or on mistakes by inexperienced associates, or inventing other untrue excuses, is not only unethical; it will catch up with you. The judiciary’s patience in these instances cannot be counted upon, and some courts are beginning to implement certification and labeling requirements when AI tools are used.
Attorneys Must Explore and Understand the Limitations of AI Tools.
One such limitation is the absence of confidentiality and privilege protection. ABA Model Rule 1.6 prohibits attorneys from revealing information relating to the representation of a client, a prohibition that Comment 4 extends to disclosures that could reasonably lead to the discovery of such information. Generative AI tools frequently use prompts for reinforcement training and output improvement, making it important to limit the disclosure of client or matter information. There should be no expectation of confidentiality or privacy when interacting with AI tools, and disclosure of privileged information may result in waiver. While some argue that a level of anonymity exists because the data is vectorized and integrated into future training rounds, there has not yet been sufficient discussion or agreement on this viewpoint.
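One way to put the point about limiting disclosure into practice is to strip identifying details from prompts before they leave your environment. The sketch below is purely illustrative: the protected terms and the redact helper are hypothetical, and crude substitution like this is a backstop to human review, not a substitute for it.

```python
import re

# Hypothetical protected terms, for illustration only. In practice the
# list would be drawn from a conflicts or matter-management system, and
# no hard-coded dictionary can catch every identifying detail.
PROTECTED_TERMS = {
    "Acme Corp": "[CLIENT]",
    "Matter 2024-0117": "[MATTER]",
    "Jane Doe": "[INDIVIDUAL]",
}

def redact(prompt: str) -> str:
    """Replace client-identifying strings with neutral tokens before a
    prompt is sent to any third-party AI tool."""
    for term, token in PROTECTED_TERMS.items():
        prompt = re.sub(re.escape(term), token, prompt, flags=re.IGNORECASE)
    return prompt

print(redact("Summarize the indemnity clause Acme Corp signed in Matter 2024-0117."))
# Prints: Summarize the indemnity clause [CLIENT] signed in [MATTER].
```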
Enterprise-oriented tools show promise in mitigating confidentiality and privilege concerns by limiting the disclosure of inputs to third parties and by promoting transparency and thorough due diligence around training sets and reinforcement training. But even when using enterprise-oriented tools, it is important to read and understand all licensing and data privacy documentation to determine how user inputs are used, with whom that information is shared and in what form.
The selection of training data has a significant impact on the ethical considerations attorneys face. Pre-training requires the collection and processing of extensive unlabeled data sets, and those choices shape the outputs a model can generate and, in turn, the risks attorneys take on when integrating those outputs. For instance:
- When training sets contain biased, offensive or harmful content, this can be perpetuated in a model’s outputs.
- When personal data is included in the training data without proper consent, it can lead to privacy violations and legal consequences. Protecting the privacy of individuals, particularly minors, whose data is included in training sets is essential.
- The use of copyrighted or protected data in training sets, without proper licensing, can lead to legal disputes.
- Many models are time-bounded by the cutoff dates of their respective training sets, meaning that they cannot accurately provide information about events that occur after their pre-training rounds (a minimal illustration follows this list).
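To make the last point concrete, here is a minimal sketch of a cutoff check. The model names and dates are invented placeholders; a model’s actual knowledge cutoff must be confirmed in the vendor’s own documentation.

```python
from datetime import date

# Hypothetical model names and cutoff dates, for illustration only.
MODEL_CUTOFFS = {
    "model-a": date(2023, 4, 30),
    "model-b": date(2024, 6, 1),
}

def needs_independent_research(model: str, event_date: date) -> bool:
    """Return True when an event postdates the model's training data,
    meaning the model cannot reliably know about it and the point must
    be researched through conventional means."""
    return event_date > MODEL_CUTOFFS[model]

# A ruling issued after the cutoff cannot be known to the model.
print(needs_independent_research("model-a", date(2024, 1, 15)))  # True
```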
This is certainly not an exhaustive list, and additional limitations will emerge as the technology continues to develop. The burden falls on us to be mindful that pitfalls exist, that they may be difficult to spot and that they must be diligently explored.
Together, We Can Influence AI Frameworks as They Are Being Built.
Our first instinct may be to shy away from these tools as too risky for attorneys to use. But by avoiding new technology, we deprive ourselves of the opportunity to learn how to use it in an ethical and effective manner. Knowledge is power, and the risk-spotting and mitigation efforts that attorneys engage in now have the potential to influence the entire AI development framework for the better. No tool is foolproof, and there will never be a tool that allows attorneys to walk away from our responsibilities to clients or the profession. By embracing this, we can lead the charge in building AI processes that improve and enrich the practice of law.