Summary
- New guidance from the ABA on use of generative AI
- Outline of obligations in using new technologies
- References to Model Rules
- Examples of challenges, dilemmas, and best practices
Have you used, do you intend to use, or will you be asked to use some form of artificial intelligence (AI) in your practice? If so, you need a written policy that outlines when, where, and how it is appropriate to engage with such technology. There are well-documented cases of lawyers employing AI to draft briefs and articles or to perform research, in whole or in part, some with, let’s say, questionable outcomes (lawyers were sanctioned in a personal injury lawsuit after their briefs cited non-existent opinions and included fabricated quotes). Despite concerns about over-reliance on such tools, the train may have left the station: LexisNexis and Westlaw have rolled out “AI-powered” legal research platforms (Lexis+AI™ and Westlaw Edge), and clients may expect their lawyers to find efficiencies from those tools.
In a presentation at the TIPS Cybersecurity and Data Privacy Conference, panelists discussed the ethics of using AI. The panelists, Alyssa Johnson (Barron & Newberger) and John Stephens and John Hendricks (both of Hendricks Law), analyzed how generative AI tools implicate key ethical duties: the duties of confidentiality, competence, and diligence. (“Generative AI,” as used here, means a program that generates text, images, and other data using models trained on existing data, patterns, or structures; ChatGPT, for example, generates its text in response to user prompts based on the content provided.) The panel highlighted real-world examples where courts now insist that lawyers disclose their use of AI (the scope and specifics of such disclosures remain to be determined in many jurisdictions). The panel also stressed other essential considerations: disclosing the use of AI to clients, along with any related costs or potential fee adjustments; eliminating bias; validating and correcting results; complying with the rules of the relevant jurisdictions; and maintaining oversight of who is using AI and how it is being used. On that last point, it is becoming clear that firms will likely need to supplement or create guidelines that address how their lawyers can use and benefit from generative AI.
Thomson Reuters, the parent entity of Westlaw, reports that while regulation is in its early stages, the focus has been on the privacy rights of individuals, particularly consumer protection and the right to opt out (“Legalweek 2024: Current US AI regulation means adopting a strategic — and communicative — approach,” Thomson Reuters Institute). Some in-house corporate departments have banned ChatGPT outright while the industry awaits clearer definitions of appropriate controls.
Firms have well-established policies and procedures for conflict checks, internet and email use, social media content, remote access, and related HR and code-of-conduct matters. Client obligations and the ethical and statutory oversight of the practice of law inform these policies. Just as courts have set down policies for electronic discovery, filing, and communications, jurisdictions will follow suit in monitoring and policing attorneys’ use, or potential abuse, of generative AI. Apart from privacy and confidentiality, a lack of proper oversight can also lead to errors and omissions. Lawyers should also weigh the risks and benefits of sharing what traditionally would have been their proprietary work product with technology open to the internet.
From briefs, memoranda, and standard motions to client updates, opinions, and newsletters, firms may have years’ worth of data and content that distinguishes them to their clients or an industry. It is foreseeable that pressure to produce advice or advocacy as efficiently and effectively as possible could lead to incorporating unreliable concepts or sources. Meanwhile, sharing your content outside of your presumably secure environment carries risks. Remember, the technology works by the user “prompting” the program with text; the software then responds by incorporating what the user said and drawing on terabytes of data to predict the next most likely series of words. Once prompted, depending on the technology, the original content has left the firm’s confidential or secure environment, which may be especially problematic if the lawyer also shared client-generated content (even if anonymized, some fact patterns lend themselves to easy identification, as some have learned in the advertising context). Training and overseeing younger lawyers on these finer points adds yet another layer of risk management.
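To make the confidentiality point concrete, consider a minimal sketch in Python, using the OpenAI API merely as a stand-in for any hosted generative AI service; the client facts in the prompt are hypothetical. Everything placed in the prompt is transmitted to the vendor’s servers and, depending on the vendor’s terms, may be retained or used for training:

```python
# Minimal sketch: sending a prompt to a hosted generative AI service.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. All client facts below are
# hypothetical and used only to illustrate the confidentiality risk.
from openai import OpenAI

client = OpenAI()  # connects to a third-party service over the internet

# Everything in this string leaves the firm's secure environment the
# moment the request is sent, including any client facts it contains.
prompt = (
    "Draft a settlement demand letter. Our client, Jane Doe, was "
    "injured on 3/14/2023 when a delivery truck struck her vehicle..."
)

response = client.chat.completions.create(
    model="gpt-4o",  # a hosted model; the provider processes the prompt
    messages=[{"role": "user", "content": prompt}],
)

# The reply is generated from the prompt plus the model's training data;
# a lawyer must still validate it before any use.
print(response.choices[0].message.content)
```

Enterprise offerings may contractually limit retention, but the transmission itself is the point: once the request is sent, the content is no longer solely within the firm’s control.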
What updates or new guidelines should firms turn to in reconciling the dawn of this new era with their traditional ways of operating? As firms and bar groups train new lawyers on confidentiality and fiduciary duties, the time has come to reframe these issues with AI in mind. Unsurprisingly, the State Bar of California has weighed in with guidelines for generative AI use, and updated firm policies could borrow some “easy” fixes from those guidelines. As noted, additional guidance should also address a firm’s proprietary interests.
Some firms may feel more comfortable issuing an outright ban and revisiting their use policies once the regulatory landscape has developed. At the very least, lawyers and firms must be aware that there will likely be a push to use these advances as clients, courts, and parties try to capture their benefits. The onus, as ever, will be on counsel to assess the risks and avoid the pitfalls.