
The Business Lawyer

Winter 2023/2024 | Volume 79, Issue 1

Ethics 3.0—Attorney Responsibility in the Age of Generative AI

Jon Meredith Garon


  • Professional ethics rules require attorneys to be competent in their understanding and use of technology, including the use of AI, the duty to maintain privacy, the adoption of cybersecurity best practices, and more.
  • Artificial intelligence provides excellent opportunities for new attorney efficiency but also requires an understanding of AI’s limitations and appropriate use.
  • Social media and metaverse environments provide excellent opportunities for marketing, client communications, and public engagement, but these uses must be limited by the lawyer’s ethical obligations regarding client and third-party communications.
  • Modern attorney competency requires a broad understanding of the tools being used by the attorney and the technological environment in which the attorney’s clients operate.


“[A] lawyer without books would be like a workman without tools.”

– Thomas Jefferson


The practice of law has gone digital. Technology has transformed the mechanics of practicing law: remote access to one’s office; reliance on smartphones to share data and email; use of social media to communicate with clients; machine learning to anticipate judicial decisions; cloud-based outsourcing to store records; artificial intelligence (AI) to conduct valuations; and legal practices existing only in the metaverse. Law is by no means alone, and to some extent, the profession saw the changes coming.

In 2009, the American Bar Association created the Commission on Ethics 20/20 (Commission) to “perform a thorough review of the ABA Model Rules of Professional Conduct [(MRPC or Model Rules)] and the U.S. system of lawyer regulation in the context of advances in technology and global legal practice developments.” The Commission held hearings and developed draft statements regarding a number of topics, including the effect of technology on a lawyer’s duty of confidentiality and client development.

In 2012, the ABA House of Delegates adopted the Commission’s recommendations. In the decade that followed, the introduction of the metaverse, cryptocurrencies, NFTs, and blockchain technologies, as well as challenges associated with a worldwide pandemic, forced an ever-greater need for lawyers to address technological issues in their practice.

Through the Commission, the ABA embraced the importance of technological change as fundamental to the practice of law. The ABA adopted an amendment to MRPC Rule 1.1, Comment [8] as follows:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.

In making this duty explicit, the ABA heightened the level of responsibility that law firms must address regarding the technological mediation of their services. Through the expanded understanding of competence, the expectation now includes an attorney’s duty to understand the benefits and risks of technology available for the practice of law.

This article focuses on the obligations of client confidentiality, the duty to understand cybersecurity, the need to proceed with caution when exploiting the new technologies of generative AI and the metaverse, and the need to communicate in a permissible manner. These are all key obligations under the Model Rules related to the use of technology.

Still, the Model Rules are not necessarily binding law and the comments thereon are not the basis for attorney discipline. Although many jurisdictions have adopted them in whole, others have adopted (or updated) them on a rule-by-rule basis. The ABA Center for Professional Responsibility reported that, as of April 4, 2023, “[t]hirty-nine (39) jurisdictions ha[d] adopted a statement on tech competence.” Other jurisdictions may have language that addresses these concerns more obliquely. Nonetheless, the Model Rules provide a normative guideline that goes beyond the technical requirements for minimum competency and may provide standards for professional malpractice liability or other culpability. As a result, the Model Rules provide a common baseline for understanding the technological competence required of practicing attorneys.

To fully understand the scope of a lawyer’s duty regarding technology, the practitioner must go beyond the Model Rules. Examples abound. Under the Health Insurance Portability and Accountability Act (HIPAA), data privacy and security rules occasionally apply to legal services, subjecting law firms to those strict privacy and security obligations. The Export Administration Act and the International Traffic in Arms Regulations may render illegal certain digital distributions. Similarly, while most law firms do not meet the threshold requirements of earning more than $25 million in annual revenue and holding the personal information for at least one hundred thousand California consumers or households, those firms that meet this standard must comply with the California Consumer Privacy Act. Other states are enacting similar laws, each with its own threshold requirements, obligations, and operations. Lawyers must also adhere to the truth-in-advertising obligations established by the Federal Trade Commission (FTC). A lawyer’s duty to remain competent and diligent in light of technological change begins with the Model Rules but other areas of substantive law may extend that duty.

I. The Expanded Obligations of Client Confidentiality

In 2012, the ABA amended the Model Rules by adopting Rule 1.6(c), which requires that “[a] lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Subsection (c) addresses confidentiality concerns associated with electronic information and extends the affirmative duty to data privacy, security, and reliability.

The ABA adopted a “reasonable efforts” standard to apply to the lawyer’s duty of confidentiality. There are a number of common technologies that a law firm should address when determining its baseline technology risk. The law firm can then adjust those standards to provide heightened protection when the nature of the information warrants additional protection.

In the context of financial institutions, the Federal Trade Commission created the Safeguards Rule under the Gramm-Leach-Bliley Act to outline the steps needed to protect customer information. The Safeguards Rule and the HIPAA Security Rule use the same three-pronged model for data protection, requiring: “administrative, technical, and physical safeguards to protect the security, confidentiality, and integrity of customer information.”

“When thinking about protecting personal information, most people begin with the technical safeguards. These are the basic steps that can reduce the risks from most identity theft and other forms of fraud.” Technical safeguards include firewalls; active software updating and patching to ensure that no outdated software is in operation; device encryption; strong unique passwords for each account; two-step verification of identity; internet encryption using HTTPS; and similar techniques.
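Some of these technical safeguards can be enforced automatically. The sketch below illustrates a password-policy check a firm might apply at account creation; the thresholds (fourteen-character minimum, three character classes) are illustrative assumptions, not requirements drawn from any rule or standard.

```python
import string

def meets_policy(password: str, min_length: int = 14) -> bool:
    """Illustrative check: require a minimum length and at least
    three of four character classes (lowercase, uppercase, digits,
    punctuation). Thresholds here are assumptions, not a standard."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and sum(classes) >= 3
```

A firm would pair a check like this with two-step verification and per-account unique passwords; length requirements generally do more work than character-class rules, so the parameters should reflect the firm's own risk assessment.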

Law firms have a duty to manage themselves and their employees and agents to ensure that everyone with access to confidential information remains trustworthy and diligent. Law firms, for example, might consider disabling the “reply all” button to ensure that each recipient of an email is an intended recipient. An attorney also can trigger an inadvertent disclosure from a lack of understanding about the tools being used. Attorneys who are unaware of the metadata in word processing documents may inadvertently leave confidential client information in the metadata associated with documents shared publicly. Attorneys may not be aware of the geo-location information they provide to Google Maps and other apps when traveling. This data could be combined with other information to disclose the target of a corporate acquisition or of a future lawsuit. Law firms should employ technical measures to reduce the likelihood of inadvertent disclosure of data by attorneys and staff.
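The metadata risk can be made concrete. A .docx file is a ZIP archive whose docProps/core.xml part records author and editor names in elements defined by the OOXML core-properties schema. The following standard-library sketch blanks those fields before a document is shared externally; the list of sensitive tags is an illustrative assumption, and a production tool would also address comments, tracked changes, and custom properties.

```python
import re

# Elements in docProps/core.xml that commonly identify the drafting
# attorney or reveal client information (illustrative, not exhaustive).
SENSITIVE_TAGS = ["dc:creator", "cp:lastModifiedBy", "dc:description"]

def scrub_core_properties(core_xml: str) -> str:
    """Blank the contents of identifying metadata elements,
    leaving the surrounding XML structure intact."""
    for tag in SENSITIVE_TAGS:
        # Replace <tag>anything</tag> with an empty element.
        core_xml = re.sub(
            rf"<{tag}>.*?</{tag}>",
            f"<{tag}></{tag}>",
            core_xml,
            flags=re.DOTALL,
        )
    return core_xml

sample = ("<cp:coreProperties><dc:creator>A. Attorney</dc:creator>"
          "<cp:lastModifiedBy>Paralegal</cp:lastModifiedBy>"
          "</cp:coreProperties>")
print(scrub_core_properties(sample))
```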

“Physical safeguards are the physical measures, policies, and procedures to protect … systems and related buildings and equipment from natural and environmental hazards, and unauthorized intrusions.” Physical security focuses on buildings, offices, computer network server rooms, mechanical locks, and other physical steps that can be taken to protect files, computers, and storage devices. Physical safeguards also include the steps taken to mitigate against natural disasters and other catastrophic events, such as off-site storage of backup files and the ability to operate remotely in the event that the firm’s offices are closed.

Administrative safeguards can be understood as the “administrative actions, policies, and procedures [that] manage the selection, development, implementation, and maintenance of security measures.” “[T]he purpose of creating cybersecurity regulations with administrative safeguard policies [is] to ensure that senior management [is] paying attention to privacy and security.”

A firm’s plan should identify the individuals responsible for cybersecurity, privacy, and confidentiality by job title, so the plan remains valid notwithstanding employee turnover. Senior firm leadership must be directly responsible for implementing the plan. The specifications of the technical and physical safeguards listed above should be detailed in the plan.

A meaningful plan includes training of all individuals and entities that have any access to confidential information, from senior partners to student interns. Training should be tailored to the role of the individual. Attorneys who have supervisory responsibility need training to ensure that those being supervised also meet their obligations.

Another important aspect of the program is data minimization or access control. If (or when) a cybersecurity breach occurs, the damage can be largely mitigated if each person’s access to confidential information is limited to that information the person needs to provide legal services. If, instead, any attorney can troll through all the legal files of the law firm without restriction, then a breach of any attorney’s account lays open every client file for theft and disclosure.
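The access-control principle above reduces to a deny-by-default rule: each account may open only the matters to which it is assigned. The sketch below illustrates that rule; the user names and matter identifiers are hypothetical, and a real system would back the assignment table with the firm's document-management platform.

```python
# Hypothetical matter-level assignments: a compromised account exposes
# only that user's matters, not every client file in the firm.
MATTER_ASSIGNMENTS = {
    "jdoe": {"matter-1001", "matter-1042"},
    "intern1": {"matter-1001"},
}

def can_access(user: str, matter_id: str) -> bool:
    """Deny by default; grant access only to explicitly
    assigned matters."""
    return matter_id in MATTER_ASSIGNMENTS.get(user, set())
```

Under this rule, a breach of the intern's account reaches one matter; under unrestricted access, the same breach would lay open every client file.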

The plan should require end-to-end encryption of all confidential records, limit the use of unencrypted email, enforce strong password policies, and establish a set of standards and contractual requirements for all of the law firm’s vendors. As noted by the FTC, an appropriate plan includes strategies to mitigate cyber risks, as well as risks associated with natural disasters and other catastrophic events. Every plan should include provisions for off-site storage of backup data along with testing of the restoration of the backup data.

Finally, the plan should include the operational response for a data breach, emergency, or natural disaster. The plan should identify who is to be contacted; the steps to address data breach notification obligations of both clients and of individuals protected by various data breach notification regimes; any law enforcement agencies to be engaged; any vendors to be involved; and anything else that reflects a decision-point that could slow the response.

II. The Intersection Between Privacy and Generative AI

Beginning in 2022, one decade after the Commission made its recommendations, the stock market became giddy with the cultural phenomenon unleashed by the public launch of OpenAI’s ChatGPT. Generative AI systems can produce data that mimics the content of human creativity. These neural networks can generate content in the form of text, voice, pictures, videos, software, physical and molecular designs, audiovisual works combining these features, and more.

Generative AI services are far more than search engines because they do not merely find published content: They evaluate, combine, and synthesize the known information to provide an answer of their own. Trained on the data with which they are provided, they develop their own responses to the questions presented. Certain generative AI services can be integrated with an “extractive” or “abstractive” process that produces a shortened or summarized version of texts or documents. “Extractive AI is all about taking existing information and using it to answer specific questions or generate new content, while generative AI is all about creating new information from scratch.”

Because generative AI is trained to see patterns and provide pleasing patterns to the user, it is excellent at providing text with an air of authority. But unless the system also uses some form of extractive AI to validate its response against known sources and limit its identification of facts to those found in external, verified information sets, generative AI is simply non-factual. Although the AI industry has labeled the failure to be accurate as “hallucinations,” this term hides the structure of generative AI. Generative AI output is only accurate to the extent that the pleasing patterns of information normatively correlate with the user’s general understanding of the truth.

A district court reminded the profession of this simple truth in Mata v. Avianca, where a law firm used generative AI to write portions of a brief.

Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the ‘Levidow Firm’) (collectively, ‘Respondents’) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.

The Respondents had little experience with ChatGPT. They did not conduct ordinary legal research in writing their brief. Schwartz testified that he was “operating under the false . . . disbelief that [ChatGPT] could produce completely fabricated cases.” He further claimed, “I still could not fathom that ChatGPT could produce multiple fictitious cases, all of which had various indicia of reliability, such as case captions, the names of the judges from the correct locations, and detailed fact patterns and legal analysis that sounded authentic.” There is little evidence that Schwartz actually checked the quality of the source material, but he provided an explanation as to why he used the AI-generated materials: “My reaction was, ChatGPT is finding that case somewhere. Maybe it’s unpublished. Maybe it was appealed. Maybe access is difficult to get. I just never thought it could be made up.” Even if true, none of those justifications seems to permit the use of the cases. The apparent reality was fiction; ChatGPT generated plausible but non-existent case materials.

Mata feels apocryphal, a campfire story for legal writing faculty to scare first-year law students into properly checking their sources and Shepardizing their cases before submitting their work to any professor or court. But the concern is real and growing.

Judge Brantley Starr of the Northern District of Texas responded to these concerns by issuing a standing order restricting the use of generative AI:

All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias.

Judge Starr’s order provides flexibility to allow generative AI to be used as part of a human-reviewed research process. Judge Starr emphasizes the importance of addressing bias in one’s writing and points to obligations to adhere to candor to the tribunal, duties to uphold the Constitution and law, and more general ethics obligations.

While data providers are generally enthusiastic about the potential for generative AI services, those in the legal sector understand that extractive AI is the core of legal research. Within the legal industry, providers understand they must combine extractive AI tools as “guardrails” to limit the generative AI output. Each of the two largest legal research services, RELX’s Lexis and Thomson Reuters’ Westlaw, entered the AI arms race, as have their parent companies more generally. The race between these two legal and business giants heated up when Thomson Reuters spent $650 million to acquire Casetext, which has been innovating through its early access to ChatGPT-4.

The widespread proliferation of generative AI content and platforms comes only a year after the metaverse, another Silicon Valley darling, took the industry by storm. Although the metaverse is often described as complex, the idea is actually quite simple: It is a virtual Mall of America, a state fair, or a Renaissance festival. Like each of these gathering spaces, Roblox and the other metaverse platforms offer games, amusements, and social events, while building a virtual economy. People participate to interact with others and to share their common experiences.

Attorneys already practice via virtual offices that exist solely in metaverse environments. To remain compliant with the applicable ethics rules and privacy obligations, such attorneys should use end-to-end encryption of their communications. Those offices should have technical measures to restrict access to their confidential client information, not only from third-party hackers but also from the platform operator, plug-in vendors, and related service providers. The metaverse does not have conference rooms; it has virtualized “Zoom-like” chat rooms. As such, attorneys must ensure that those online environments are not subject to eavesdropping, whether of a live conversation or a digitally preserved one.

In addition to technical measures, attorneys should follow the administrative and physical measures of the Safeguards Rule to ensure confidentiality, data integrity, and protection from eavesdropping. Physical security for the metaverse means that the attorney is paying attention to the environment in which the legal services take place and that all vendors meet their privacy and security requirements. In other words, a crowded coffee shop creates only an illusion of privacy. If a laptop’s screen is observed by another person, or the customer at the next booth listens to a conversation, then privilege does not exist, and security has not been maintained.

As to the administrative requirements, the Safeguards Rule focuses on contractual obligations to ensure the necessary privacy and security to maintain confidentiality and privilege. The metaverse vendor agreements must be sufficiently robust so that the vendor stands behind the security it is providing. The agreement must also specify that the data gleaned from the parties’ interactions on the generative AI platform or metaverse environment is not collected and exploited in a manner that could interfere with the firm’s duty of confidentiality to its clients. Names, unique identifiers, geolocation data, and similar information could reveal confidential information to third parties in violation of the firm’s ethical or legal obligations.

Finally, because both AI services and metaverse services are provided through third-party vendors, these technologies highlight the addition of Comment [3] to MRPC Rule 5.3, which requires that, “[w]hen using such services outside the firm, a lawyer must make reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations.” To meet the reasonableness test, an attorney should make confidentiality requirements, data-retention obligations, and data-breach-notification obligations express contractual terms. The agreement should also include the ability to assess or audit compliance with those contractual obligations. As an ever-increasing number of legal services are conducted through third-party technologies, lawyers should contractually impose their confidentiality and privacy obligations onto their vendors.

III. The Ethical Lawyer’s Digital Presence

The ubiquity of the Internet has required that lawyers engage their clients where the clients expect to get services, namely in the online environment. When considering the ethical obligations regarding online legal services, the starting point has not changed. MRPC Rule 7.1 requires truthfulness and accuracy about the lawyer and the lawyer’s services. This requirement extends to all communication and necessarily includes electronic and online communications. Comment [3] was modified slightly so that the duty is owed to the public generally, rather than to just prospective clients.

This affirmative duty to the public has certain unique consequences in the social media and public space. For example, a lawyer should refrain from overstating the nature of the lawyer’s practice or presence. Online tools can easily create an illusion of a national or global presence when, in reality, a firm has only a local or regional presence. Content on a lawyer’s website or blog could lead to confusion if it falsely stated or implied that content prepared by non-attorneys had been prepared by an attorney within the firm. Simple attribution would generally resolve this issue.

The Commission codified an earlier ethics opinion regarding the conditions upon which a person can become a prospective client. The ABA amended the text of MRPC Rule 1.18 to define a “prospective client” as “a person who consults with a lawyer about the possibility of forming a client-lawyer relationship,” rather than merely someone who “discusses” the possibility. The change clarifies that a “prospective client” is a person who has a “reasonable expectation that the lawyer is willing to discuss the possibility of forming a client-lawyer relationship.”

The District of Columbia Bar Association issued a pair of ethics opinions involving social media that aid in understanding the parameters of a lawyer’s obligations regarding advertising and client communications. Opinion 370 addresses “Social Media I: Marketing and Personal Use,” while Opinion 371 covers “Social Media II: Use of Social Media in Providing Legal Services.” The DC Bar’s Committee on Legal Ethics (Committee) began Opinion 370 with a note of caution: “Increasingly, attorneys are using social media for business and personal reasons. . . . The Committee notes that any social media presence, even a personal page, could be considered advertising or marketing, and lawyers are cautioned to consider the Rules applicable to attorney advertising . . . .”

The Committee noted that ethical rules and the interpretation of those rules vary from state to state; that the tools of social media do not respect the geographic boundaries required of law licensure; and that there are many uses of social media, beyond personal or social communications, that the Committee’s opinions do not address.

Summarizing the concerns regarding personal use of social media, the Committee highlighted areas in which an attorney could inadvertently overstep the activities permitted under the rules:

  • “Communications via social media are inherently less formal than more traditional or established forms of communication.”
  • “Content contained on a lawyer’s social media pages must be truthful and not misleading.”
  • “[I]f an attorney connects with, or otherwise communicates with, clients on social networking sites, then the attorney must continue to adhere to the Rules and maintain an appropriate relationship with clients.”
  • “[S]tatements on social media could expose a lawyer to civil liability for defamation, libel or other torts.”
  • “[The Committee recommended that] all law firms have a policy in place regarding employees’ use of social networks [as lawyers in law firms] have an ethical duty to supervise subordinate lawyers and non-lawyer staff to ensure that their conduct complies with the applicable Rules, including the duty of confidentiality.”

To address the inadvertent formation of client relationships, the Committee suggested disclaimers. “Disclaimers are advisable on social media sites, especially if the lawyer is posting legal content or if the lawyer may be engaged in sending or receiving messages from ‘friends,’ . . . when those messages relate, or may relate, to legal issues.”

In Opinion 371, the Committee emphasized that competent representation may require the attorney to review the content of the client’s social media activities in both a transactional and litigation setting. “In litigation, client social media postings could be inconsistent with claims, defenses, pleadings, filings, or litigation/regulatory positions.” The opinion highlighted the obligation “to ensure that claims and positions are meritorious under Rule 3.1, which requires a non-frivolous basis in law and fact, and that misrepresentations are not made to courts or agencies in violation of Rules 3.3 and 8.4.” The opinion emphasized that, in the transactional context, “review of client social media for their consistency with representations, warranties, covenants, conditions, restrictions, and other terms or proposed terms of agreements could be important because inconsistency could create rights or remedies for counterparties.”

At the same time, lawyers must not entirely shun social media. On the contrary, lawyers conducting appropriate investigations have an obligation to review the social media postings of adverse parties as part of their factual review of ongoing matters. In addition, to the extent that an attorney’s client is itself subject to regulatory oversight of its online and social media activities, the attorney must be competent and engaged to ensure that these obligations are met. In contrast, it is not acceptable to “friend,” or otherwise connect online with, another person involved in litigation, particularly an unrepresented opposing party or juror.


The evolution of technology has affected every facet of legal practice. Lawyers have recognized this, and the profession has worked diligently to keep itself apprised of the changes to law and technology within each area of practice. A decade after the Commission’s recommendations, however, the modifications to the MRPC highlight the changes affecting not just clients but lawyers themselves. Technology should serve the client and the relationship. The technological lawyer remains, first and foremost, a client-centered lawyer. This is the heart of ethical lawyering and the fundamental principle underlying the rules of technological competence.