chevron-down Created with Sketch Beta.

Ethical Dilemmas of AI in Law and Lessons from Red Teaming

Daniel J Siegel

Summary 

  • Understand the ethical dilemmas and security risks of generative AI in law.
  • Red teaming exercises reveal AI vulnerabilities.
  • Continuous efforts are needed for AI security in legal practice.
Ethical Dilemmas of AI in Law and Lessons from Red Teaming
Sam Edwards via iStock

Jump to:

Size doesn’t matter. There, I said it. I know you think I’m crazy. Of course, it matters.

Maybe it’s relevant for some things. But it certainly isn’t relevant when it comes to generative artificial intelligence (GenAI) and attorney transgressions.

The perception is that the only lawyers who get into trouble by misusing AI are small firm lawyers. After all, they make the headlines, and they don’t have ethics counsel and others to clean up things. But that notion has been debunked again and again when you dig a little deeper. It seems that AI disease impacts lawyers from firms of every size.

It seems that, every day, you read about another attorney who commits a gaffe by using AI.

Usually, it is the ones who cite false cases. I’ll talk about them later.

But the other group is just as noteworthy. That group encompasses lawyers who go to and input confidential information into a chatbot, where it can be revealed to the world.

That reality––the one no one thinks about––was confirmed in a recent study, “Lessons from Red Teaming 100 Generative AI Products.” Compiled by Microsoft employees, the paper discusses their “experience red teaming over 100 generative AI products at Microsoft.” The import for lawyers is that the lessons learned from the red teaming relate easily to lawyers’ ethical duties under the Rules of Professional Responsibility.

“Red teaming” is a cybersecurity practice where a group of people (a team), acting like malicious attackers, simulates a cyberattack to try and identify vulnerabilities in an organization’s systems and security measures. In many ways, a “red team” is like a group of white hat hackers, commonly known as “ethical hackers.” Ethical hackers try to find the danger spots before the bad guys do.

The conclusion of most relevance, and one that lawyers often try not to think about, is that GenAI systems will never be completely secure. Like most new technologies, lawyers (and other users) often take AI security for granted. After all, it’s a new toy. It’s fun; it’s versatile. And it can be a real time saver.

That is, if you use it properly and ethically.

Lawyers use GenAI for legal research, contract analysis, predictive analytics, case management, document management and countless other ways. The problem is that untested or flawed AI models can pose significant ethical, legal and security risks. It isn’t just the hallucinations––the fake citations that AI seems to make up all the time. Red teaming is all about security.

Red teaming helps companies identify these dangers and helps to ensure AI compliance with their ethical responsibilities. It also identifies when their AI tools are susceptible to data extraction, i.e., leaking or exposing confidential client data. Red teaming can help to identify security flaws before an attack occurs.

That’s where the Microsoft study comes in. As is already common knowledge, AI chatbots, legal assistants and research aides can also store, share or expose privileged information. Sometimes, they do it accidentally, sometimes by design.

Red teaming helps companies assess how AI handles confidential client data and whether it is consistent with the privilege doctrines. All these measures can help ensure compliance with data protection and privacy laws.

Although not explicitly focused on the use of AI in law firms and other legal settings, the Microsoft study highlights issues that are equally relevant to all businesses, including law firms. While the study summarizes eight different lessons, lawyers should focus on number eight: “The work of securing AI systems will never be complete.”

Lesson eight emphasizes the perpetual nature of AI security. It reminds users that the security need of AI systems is an ongoing process that will never be fully resolved through technical solutions alone. It requires continuous efforts, including economic considerations, iterative break-fix cycles, regulatory measures and ethical considerations, particularly for legal professionals. Thus, lawyers have to be vigilant and recognize that 100 percent security is impossible.

That’s right. AI systems will never be 100 percent secure. That doesn’t mean lawyers can’t use them, but it does emphasize the need to be vigilant––and careful.

The paper repeats the well-known phrase that “no system is completely foolproof.” In doing so, the paper reminds readers that there are inherent vulnerabilities in all systems, including AI. Thus, even the most secure systems with the greatest safeguards can be compromised by well-resourced adversaries or human error.

And hacks will happen. They do happen. They have happened with other technologies, and they will happen with AI.

Large law firms that have been hacked include Kirkland & Ellis; K&L Gates; Proskauer Rose; Orrick, Herrington & Sutcliffe; and Allen & Overy. Courts have been subject to ransomware. Hackers have exploited vulnerabilities in file transfer software like MOVEit to access sensitive client data. There have been ransomware attacks that have exposed information from numerous clients across these and countless other firms. Without a doubt, these firms spend millions on cybersecurity, but attacks happen.

JPMorgan Chase, the financial behemoth, has 62,000 technologists working just in cybersecurity, one of the largest such staffs in the world. Despite those efforts, in February 2024, JPMorgan Chase reported that it discovered a data breach affecting the personal information of nearly half a million customers. According to a filing with the Office of the Maine Attorney General, Chase found a software glitch that allowed unauthorized access to specific data since August 26, 2021.

Lawyers, too, are entrusted with significant information about clients. The advent of GenAI has made more information available in more places, both internally and externally, and increased the possibility of breaches. It has also increased the likelihood of inadvertent disclosures by lawyers who do not protect information sufficiently when using AI and AI chatbots.

Lawyers can inadvertently expose confidential client information when using AI chatbots because these systems often store the data input into them. Worse, lawyers do not always heed the privacy policies of these bots and input confidential information without realizing that they are divulging client secrets. These actions could potentially lead to breaches of attorney-client privilege, where sensitive client details could be unintentionally shared or used to train the AI model without proper anonymization, putting the confidentiality of the client at risk.

Then there are the lawyers who are either stupid, or lazy, or ignorant, or some combination of the above. We have all heard about the New York lawyer who submitted a brief with multiple false citations. He initially claimed that he asked ChatGPT, the AI bot, if the citations were accurate, and when it said “yes,” he believed it. Of course, what he didn’t know was that the bot wanted to please him, so it lied.

The bot wasn’t taught what one colleague at Microsoft always tells his staff, “The bots can never lie to the customer. Once they do, you never recover credibility.”

The incidents continue to happen. And they don’t just happen to solo and small firm lawyers. Recently, a lawyer at one of the 50 largest firms in the country filed a brief with numerous false citations. When it was discovered, he withdrew the filing and said, “The cases cited in this Court’s order to show cause were not legitimate.”

He added, “Our internal artificial intelligence platform ‘hallucinated’ the cases in question while assisting our attorney in drafting the motion in limine. This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm.”

He concluded, “This serves as a cautionary tale for our firm and all firms, as we enter this new age of artificial intelligence.”

This comes from a firm with over 1,000 lawyers. And he is the son of the founder of the firm.

After the incident, a memo circulated at the firm, telling the lawyers, “As all lawyers know (or should know), it has been documented that AI sometimes invents case law, complete with fabricated citations, holdings, and even direct quotes. As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified.”

Duh.

I check every citation in every brief I write, and my staff does the same.

The memo then added, “For example, this is no different from what we all learned in the first week of law school––case law must be checked to ensure that it is still good law. Just as failing to identify a reversal can result in court sanctions, the same applies to citing an AI-generated case that does not exist. It is your responsibility as a lawyer to ensure that all citations are verified and accurate. . . . [the] responsibility for verification rests with the researching attorney.”

Duh.

I still remember the fear that went through me every time I checked a citation in Shepard’s Citations. We didn’t have computers; we had the main book and what seemed like endless sub-volumes and pocket parts. We were never sure we got everything, but we did our best. And prayed.

Now, lawyers are lazy. Click here and there. Suddenly, you have a memorandum of law, and you think you are done.

GenAI is the modern version of Shepard’s. Point and click, and don’t worry about the details.

Details matter.

This brings us back to the study by Microsoft researchers.

The study reminds us that when a lawyer enters client details or other sensitive data into a chatbot, the information may be stored within the AI system, potentially accessible to others who use the same model, even if not explicitly intended to be shared. They must always be certain of the security of their data.

They also need to be reminded that AI bots are machines, no matter how they communicate and “act” like people. AI models cannot differentiate between confidential and nonsensitive information and treat all information input as part of the training set.

So, what can lawyers do individually and systemically to deal with AI and avoid becoming the latest headline in the legal news?

First, do your job. Check citations.

Sadly, lawyers look for shortcuts. It is natural to do, but the essence of being a good lawyer is making sure that you tend to every detail. If you are writing a brief or a memo of law, that means checking every detail, every citation and making sure they are accurate. If you are preparing a contract, read and reread it so that every detail is accurate. The little details are the difference between winning and losing. Do not cut corners.

Second, make sure that you are up to date on legal technology. Many lawyers take pride in claiming to be Luddites, as though it is a badge of honor. It is not. You wouldn’t go to a doctor who doesn’t order those “new-fangled” MRIs because they weren’t around when he was in medical school. You would get a new doctor. Fast.

So, it should be with lawyers. You do not need to know how to program a computer to stay up to date on legal technology. Cybersecurity, for example, is more often about knowing what to do rather than actually doing it. Without a doubt, that is or should be at the top of every lawyer’s mind all the time.

Third, balance the need for innovation against the risk you are willing to take. As lawyers, we must advise our clients about risks. We need to advise ourselves, or seek advice, about the prudent risks with all technology, not just AI.

Lastly, AI is a new frontier, representing a significant leap in technology and its applications across various fields. It has the potential to revolutionize the practice of law by streamlining processes and providing insights that were previously unimaginable. However, despite its impressive capabilities, AI is not a replacement for doing the legwork and the research.

AI can assist by processing large amounts of data quickly and identifying patterns, but it cannot replace the nuanced understanding and contextual knowledge that come from diligent research. Moreover, relying solely on AI without doing the necessary groundwork can lead to significant risks.

AI systems are only as good as the data they are trained on, and they can sometimes produce misleading or incorrect results if the data is flawed or biased. Therefore, it is crucial to validate AI-generated insights with traditional research methods. This ensures that the conclusions drawn are accurate and reliable. In essence, while AI can be a powerful tool to augment human capabilities, it should be used in conjunction with, rather than as a substitute for, comprehensive research and analysis.

    Author