
Law Practice Magazine

The TECHSHOW Issue

The Ethics and Regulation of AI

Jayne R. Reardon and Tom Martin

Summary

  • The legal landscape is witnessing growing efforts to regulate AI, both nationally and internationally, with initiatives such as the American Bar Association’s Resolution 604 and the European Union’s AI Act.
  • The article examines AI’s ethical and regulatory challenges in the legal sector.
  • Generative AI has become a significant force in the legal profession, prompting concerns and uncertainties among legal practitioners.


“The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown.” ––H. P. Lovecraft

Artificial intelligence (AI) has been with us for decades. But this new strain of generative AI burst onto the scene a year ago and is propagating exponentially, accomplishing more in that short time than ever before. It is a technology built on modeling language, so it is unsurprising that lawyers’ work, focused on language (definitions, precedent and argumentation), would fall quickly within its grasp.

The unknown of how AI works, compounded by the unknown of what it will subsume, has caused and continues to trigger uncertainty and fear in the legal profession. A natural response to fear is to attempt to control, and that attempt is now manifesting itself as ethical constraints on and regulation of AI.

In this article, we will discuss these burgeoning efforts to control AI with ethics and regulation, but before we get there, let’s set the table.

What Is Artificial Intelligence?

AI is not a single technology but comes in different forms. Machine learning imbues computer systems with the ability to learn from data and improve their performance without being explicitly programmed. Generative AI, which evolved from machine learning, can generate new data, such as images, video, audio, text or computer code, from existing data.

Language modeling (LM) is a subset of generative AI that uses statistical and probabilistic techniques to predict the likelihood of a given sequence of words occurring in a sentence. Language models analyze bodies of text data to provide a basis for their word predictions. The “large” in large language models (LLMs) refers to the size of that text data: massively large data sets are used for training.
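
To make “predicting the next word” concrete, below is a minimal sketch of a bigram language model in Python. The tiny corpus and function names are our own illustrative inventions; production LLMs use neural networks trained on vastly larger bodies of text, but the core idea of choosing the statistically most likely next word is the same.

    from collections import Counter, defaultdict

    # Toy training corpus (hypothetical); a real LLM trains on billions of words.
    corpus = "the court granted the motion the court denied the appeal".split()

    # Count how often each word follows each preceding word (bigram counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent next word and its estimated probability.

        Assumes the word appeared somewhere in the training corpus.
        """
        counts = following[word]
        best, count = counts.most_common(1)[0]
        return best, count / sum(counts.values())

    print(predict_next("the"))  # ('court', 0.5): "court" follows "the" half the time

Scaled up to billions of parameters and trillions of words of training text, this same predict-the-next-word objective produces the fluent, and sometimes fabricated, prose lawyers now see from generative AI tools.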

There are a whole host of ethical issues that can trip up lawyers as we forge into this continually transforming technological landscape.

Ethical Considerations

The ethical obligations of lawyers vary state by state but are generally reflected in the ABA’s Model Rules of Professional Conduct (Rules). The Rules apply to the use of AI just as much as they apply to the use of traditional tools (such as Shepard's Citations) and technological ones (such as Microsoft Word).

Competence

First, lawyers should know by now that they have an ethical duty of technological competence, ensconced in Model Rule of Professional Conduct 1.1 and adopted in some form by 40 states. Comment 8 to that rule states that to maintain the requisite knowledge and skill to be competent, a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” This means lawyers need to understand the emerging technology of generative AI well enough to be familiar both with how it can benefit their clients and practice and with how it can pose risks to them.

On a related note, a lawyer’s duty of competence and duty of diligence under Model Rule 1.3 arguably require them to review and understand the terms of service and any data security representations of any AI technology they use.

Client Confidentiality

The nature of a generative AI tool is that it uses all the data it has been “trained on” to predict the next sequence of words. Unlike the rules-based research we are used to performing via legacy versions of Westlaw or LexisNexis, for example, the information we put into ChatGPT is not “erased” once the search results are returned. Providing client data to an AI tool may very well violate the client confidentiality required by Model Rule 1.6, which:

requires a lawyer to act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client or who are subject to the lawyer’s supervision. See Rules 1.1, 5.1 and 5.3. (Comment 18)

When transmitting a communication that includes information relating to the representation of a client, the lawyer must take reasonable precautions to prevent the information from coming into the hands of unintended recipients. (Comment 19)

The same duty not to use or reveal information is owed to prospective clients as well, under Model Rule 1.18.

The terms of use for ChatGPT and other AI platforms make clear that any content shared may be used for training purposes, and the onus is on the user to opt out if they do not want their content used that way.

Another ethical conundrum is whether and to what extent providing client data to an AI technology may waive attorney-client privilege. We have found no case law addressing this issue yet but urge lawyers to think proactively about how to mitigate this risk.

Supervising Lawyers and Nonlawyer Assistance

Model Rules 5.1 and 5.3 provide that attorneys have a duty to supervise lawyers and other personnel working with them. Attorneys should ensure that those in their organization using AI products––lawyers and other personnel alike––are properly trained and understand the ethical considerations surrounding its use. Although Rule 5.3 was promulgated long before the advent of AI, Comment 3 makes clear that lawyers must supervise the services provided by nonlawyers, such as a document management company, to make sure the services are compatible with the attorney’s own professional obligations.

It is up to the lawyer to analyze the accuracy and applicability of responses received from an LLM. LLMs are trained on large amounts of text data gathered up to a cutoff date, so a response to a prompt may not be as up to date as you would like, and it may not be as relevant as you need in a given context. For example, while a chatbot may provide answers to a legal prompt such as how to evade eviction, the answer may not fit the user’s jurisdiction or statutory requirements. This is not a shortcoming of the chatbot, because it only generates text based on probabilities and patterns learned from its training data.

Don’t end up sanctioned like attorney Steven Schwartz in the Avianca case. Attorneys must closely examine any cases an AI tool cites, along with their subsequent treatment, to confirm they remain good authority before relying on them. Likewise, attorneys should train their legal professionals to verify outputs before using them.

Informing Clients/Courts

An untested question is whether lawyers should be required to inform their clients about the use of AI.

Rule 1.4, entitled “Communication,” requires a lawyer to inform the client of any decision or circumstance with respect to which the client’s informed consent is required. Informed consent, in turn, is defined in Rule 1.0(e) as agreement by a person to a “proposed course of conduct after the lawyer has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct.” Rule 1.4 also requires a lawyer to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.”

This obligation has not been interpreted to require informing clients about technological tools such as case management tools or e-discovery. However, perhaps in a harbinger that AI will be treated differently, lawyers for convicted Fugees rapper Pras Michel filed a post-trial motion for a new trial on the basis that Michel’s lawyers “botched” the closing argument by using AI. The court filing asserts that counsel relied on the AI program EyeLevel.AI embedded in CaseFile Connect and further, that Michel’s attorneys had an undisclosed financial interest in CaseFile Connect. We should keep an eye on how this pans out.

Related to whether clients should be informed of the use of AI tools is the issue of how a lawyer charges clients for services that may be rendered more efficient by using such tools. Model Rule 1.5 requires a lawyer to charge reasonable fees. The time savings an attorney may enjoy through the use of technology should be passed along to the client.

In the months of publicity following the Avianca case, some judges began entering standing orders requiring counsel to disclose whether their pleadings or briefs were prepared with the use of generative AI. Compliance with such directives would be virtually impossible, as the ostensibly reportable applications are constantly changing and generative AI is being incorporated into everyday programs, including Microsoft 365 and Google Apps.

In February 2023, at its Midyear Meeting, the American Bar Association passed Resolution 604 urging organizations involved in AI to adhere to specific guidelines:

  1. AI developers and operators should ensure that AI systems are under human authority, oversight and control.
  2. Those responsible for using AI products and systems should be held accountable for any harm or injury they cause unless they have taken reasonable measures to prevent it.
  3. AI developers should ensure transparency and traceability in their products while safeguarding intellectual property by documenting key design and risk decisions related to data sets, procedures and outcomes.

Similar concerns have been raised in policies and regulations proposed and promulgated by other bar associations and by governments worldwide.

AI Regulation

Although AI regulation is still nascent, 31 countries have passed AI legislation and 13 more are debating AI laws.

Approved by the European Parliament in June 2023, though not yet enacted, the European Union’s (EU) AI Act aims to be the “world’s first comprehensive AI law.” At the core of the EU’s strategy lies the categorization of AI systems into four risk tiers, each governed by distinct regulations. Executing this plan presents formidable hurdles, including the intricate task of defining AI systems and assessing AI-related risks.

Issued October 30, 2023, the White House’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” However, the EO’s requirement that companies training foundation models “must notify the federal government when training the model, and must share the results of all red-team safety tests” may have a chilling effect on competition and new market entrants, given that red-teaming exercises can cost well into six figures.

Perhaps not surprisingly, the U.S. approach consists of nonbinding recommended actions, while the EU’s AI Act is binding legislation that, if enacted, would directly regulate use cases or applications of AI algorithms. A study of 1,600 AI policies around the world found that just 1 percent aim to control the results produced by AI rather than the ways AI is used. Regulating AI’s uses is more challenging because uses constantly change, whereas the risks from AI’s outcomes can be defined more consistently, no matter the specific use.

There remains much to be seen in how the regulation of AI can keep pace with the growth of AI itself.

The advent of generative AI marks a pivotal point of transformation. This is not merely a technological upgrade, but a fundamental rethinking of how legal services are to be delivered and regulated. As legal practitioners, the onus is now on us to extend our expertise beyond traditional boundaries and embrace technical proficiency in AI technologies while at the same time ensuring competency and client confidentiality.

Nationally and internationally, there is a palpable sense of urgency as legislative bodies grapple with the task of creating AI-specific laws. This is not just a race against technology’s rapid pace but also a quest to harmonize these advancements with the ethical fabric of the legal profession.

As we become more familiar with AI, ethical rules and policies may grow less reactionary and more forward thinking. The fear of the unknown will give way to a bright, hopeful future with greater efficiency, accuracy and accessibility, opening new horizons for justice and legal services in the age of AI.

We will discuss all this and more at TECHSHOW 2024. Hope to see you there.
