January/February 2020

Hot Buttons

The ABA Tackles Artificial Intelligence and Ethics

Sharon D. Nelson & John W. Simek

The majority of lawyers don’t know a lot about artificial intelligence (AI). Everyone has heard the hype, few know the reality, and everyone groans when the words “robot lawyer” are spoken.

There may really be robot lawyers one day, but not any day soon. Still, we are being shaped by AI in ways we have not yet fully grasped.

ABA Resolution 112

On Aug. 12, 2019, the ABA House of Delegates passed Resolution 112, which states:

“RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”

The authors are delighted to see this resolution passed. Kudos to the ABA’s SciTech Section for bringing the resolution forward and forming a working group to address these issues.

The Report Supporting the Resolution

The report supporting the resolution is well worth reading. Though necessarily not all-inclusive, it points out that courts and lawyers need to be aware of the issues involved in both using and not using AI, including whether any AI used may be flawed or biased.

The working group to be established will define guidelines for legal and ethical AI usage. It may come up with a model standard that will come before the ABA House of Delegates for adoption. The draft resolution, report and amended resolution as passed may be found at https://www.americanbar.org/news/reporter_resources/annual-meeting-2019/house-of-delegates-resolutions/112.

How Is AI Being Used Today in the Practice of Law?

While there are many ways that AI is being used in the practice of law, the chief areas (thus far) are these:

  • Electronic discovery/predictive coding
  • Litigation analysis/predictive analysis
  • Contract management and analysis
  • Due diligence review
  • Detecting dangerous or bad behavior within an entity
  • Legal research

As AI advances, it may also help detect deception in the courtroom. While that usage startles lawyers when we describe it, the U.S., Canada and the European Union have already conducted pilot programs using deception-detecting kiosks for border security.

It is inescapable that, in time, lawyers who do not adopt AI will be left behind by their peers.

What Does Ethics Have to Do With AI?

Revised Model Rule 1.1 of the ABA Model Rules (now adopted by 36 states) requires that lawyers be competent, which includes keeping “abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” This means knowing about AI, knowing whether its use would benefit the client and, of course, having a basic understanding of how it works and of its benefits and risks.

ABA Model Rule 1.4 involves the duty to communicate. This would include discussing with clients the decision to use AI in providing legal services and obtaining their informed consent. If the lawyer decides not to use AI, that decision may also need to be communicated. Consider, too, that Model Rule 1.5 may come into play: it requires a lawyer’s fees to be reasonable. If AI can produce substantial savings for the client, it may be necessary to consider using it.

How about the confidentiality requirements of Rule 1.6? The use of AI may require confidential data to be shared with third-party vendors. How do you reasonably protect that data? You are going to have to know where the data is stored, how safe it is in transmission, who will have access to the data, etc.

Then there are Model Rules 5.1 and 5.3 regarding the supervision of lawyers and nonlawyers assisting in the provision of legal services. The scope of 5.3 encompasses nonlawyers—whether human or not. Did that catch you off guard? That means AI has to be supervised, and you need to understand the technology well enough to ensure compliance with your ethical duties.

Bias and Transparency: Another Ethical Issue

GIGO—IT folks translate that as “garbage in, garbage out”—applies to the data fed to AI systems. For instance, if you feed historical court opinions to an AI program, won’t its output reflect the biases of former times? What if the programmers or AI trainers transfer their own biases to the AI?
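
To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and the widely used scikit-learn library, of how a model trained on biased historical outcomes simply learns to reproduce them. Nothing here depicts any real legal AI product; the features, numbers and variable names are all hypothetical.

    # Minimal "garbage in, garbage out" sketch: synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # One legitimate feature (severity) and one protected attribute
    # (group) that should be irrelevant to the outcome.
    severity = rng.uniform(0, 1, n)
    group = rng.integers(0, 2, n)

    # Biased "historical" outcomes: past decision-makers penalized
    # group 1 beyond what severity alone justified.
    outcome = (severity + 0.8 * group + rng.normal(0, 0.3, n) > 1.0).astype(int)

    X = np.column_stack([severity, group])
    model = LogisticRegression().fit(X, outcome)

    # Identical facts, different group: the model reproduces the bias.
    same_case = np.array([[0.5, 0], [0.5, 1]])
    print(model.predict_proba(same_case)[:, 1])
    # Prints roughly [0.05, 0.84]: same severity, very different "risk."

The model has no way to know which parts of its training data reflect merit and which reflect historical prejudice; it simply learns the pattern it is given.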

Where’s the ethical issue here? It is in Model Rule 8.4(g) which says it is professional misconduct to “engage in conduct that the lawyer knows or reasonably should know is harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.”

You may remember the 2016 disaster with Tay, Microsoft’s AI-powered chatbot. She responded to folks on Twitter and elsewhere and had personality elements. Because she learned and mimicked speech from the people she talked to, she was easy pickings for internet trolls, who fed her racist, homophobic and other offensive comments. Tay went from family-friendly to foul-mouthed and was pulled in less than 24 hours. Just that fast, AI went awry.

Imagine an AI chatbot on a lawyer’s website doing the same thing. Scary, huh?

In another 2016 incident, ProPublica showed that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software, used by some courts to predict the likelihood of recidivism in criminal defendants, was biased against African Americans. But no one knew how it worked—it was “proprietary,” so the company didn’t want to be transparent about its programming. This is called “black box” AI: no one can, or will, explain how the AI generates its output from its input.
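
ProPublica’s work illustrates a point worth remembering: even when a tool is a black box, its outputs can be audited from the outside. Below is a rough sketch of that kind of audit, comparing false positive rates across groups. The data is synthetic and the 0.6 threshold is invented for illustration; this is not ProPublica’s actual methodology or data.

    # Outside-in audit sketch: compare error rates across groups using
    # only the tool's scores and later-observed outcomes. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000
    group = rng.integers(0, 2, n)        # protected attribute
    reoffended = rng.integers(0, 2, n)   # actual outcome, observed later

    # Hypothetical black-box risk scores that skew higher for group 1.
    score = rng.normal(0.4 + 0.2 * group + 0.2 * reoffended, 0.15, n)
    flagged = score > 0.6                # invented "high risk" cutoff

    for g in (0, 1):
        # False positive rate: flagged high risk but did NOT reoffend.
        mask = (group == g) & (reoffended == 0)
        print(f"group {g}: false positive rate = {flagged[mask].mean():.2f}")

A large gap between the groups’ false positive rates is exactly the kind of disparity that raises ethical and legal red flags, whether or not anyone can see inside the box.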

In 2014, Amazon developed a recruiting tool to help identify possible software engineers to hire. The system quickly began discriminating against women. The company finally abandoned it in 2017. Imagine a law firm using a tool like that for three years. The ethical and legal implications boggle the mind.

We are moving away from the “black box” model in favor of transparency. A number of laws have been passed prohibiting bias and requiring transparency. Lawyers will need to be familiar with those laws and make sure their own use of AI conforms to them.

Europe Takes the Lead

As is often true, Europe is ahead of the U.S. in considering ethics and AI. In April 2019, the Independent High-Level Expert Group on Artificial Intelligence (set up by the European Commission) published Ethics Guidelines for Trustworthy AI. The guidelines indicate that trustworthy AI has three components:

  • It should be lawful, complying with all applicable laws and regulations;
  • It should be ethical, ensuring adherence to ethical principles and values; and
  • It should be robust, from both a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm.

The guidelines set forth principles and values for AI (the language below is condensed):
  • Beneficence: “Do Good”
  • Nonmaleficence: “Do No Harm”
  • Autonomy: “Preserve Human Agency”
  • Justice: “Be Fair”
  • Explicability: “Operate Transparently”

Some concrete ethical requirements for AI include:

  • Accountability
  • Data governance (high quality and without bias)
  • Designing for everyone (including people with disabilities)
  • Human oversight of AI
  • Nondiscrimination
  • Respect for human autonomy
  • Respect for privacy
  • Robustness (secure, reliable and able to deal with errors and inconsistencies during the design and deployment of the AI)
  • Safety (to humans or the environment)
  • Transparency

The guidelines also address some critical concerns, such as identification without consent, covert AI systems (of which we suspect many already exist), normative and mass citizen scoring without consent in deviation from fundamental rights, and lethal autonomous weapon systems.

Although the guidelines are not currently legally binding, experts expect the framework they propose to become the foundation for a widely accepted standard in the development, use and governance of AI.

The ABA working group will of course seek to integrate the ethical rules for lawyers into the fabric of its own report.

Using AI Ethically

Is ethical AI even possible? That is a very good question. From tech giants like Google and Microsoft to AI start-up companies, many are creating corporate principles to make sure AI is designed and deployed in an ethical way.

But will all the public-facing promises be kept? Companies change their operations and policies all the time. Ideals end up being sacrificed for financial gain. Political pressure is applied. In the government and the military, orders are given.

“We don’t want to see a commercial race to the bottom,” Brad Smith, Microsoft’s president and chief legal officer, said on the Official Microsoft Blog. He added, “Law is needed.” Perhaps another mission for the ABA working group is to recommend the kinds of law that should be adopted.

Ed Walters: An Academic Offers His Reflections

As many readers will know, Ed Walters is the CEO of Fastcase, the esteemed legal research company, as well as an adjunct professor at the Georgetown University Law Center and at Cornell Tech, where he teaches The Law of Robots. Walters is rather glad there is a movement to bring “unsexy” back to AI after all the unwarranted hype. As he points out, AI today performs many mundane but important tasks in law practice.

He readily acknowledges that the ABA is wise to pursue a close look at AI and ethics, and he notes that there has recently been a drive to reject AI that is not transparent about how it functions. With his customary wry humor, he adds: “I am afraid of smart machines that make us stupid run amok.” And so are we.

Walters recorded an entire Legal Talk Network Digital Detectives podcast with us that contains much more of his keen wit and observations than we can convey here. You can find the podcast at legaltalknetwork.com.

Final Thoughts

One of our favorite quotes is from Jonathan Shaw, the managing editor of Harvard Magazine, who said of AI, “Nothing about advances in the technology, per se, will solve the underlying fundamental problem at the heart of AI, which is that even a thoughtfully designed algorithm must make decisions based on inputs from a flawed, imperfect, unpredictable, idiosyncratic real world.” As he notes, engineering can’t always fix such problems after an AI system has been designed, which is why ethical issues must be addressed during the design phase and thoroughly vetted before deployment.

If you haven’t read the book I, Robot (a compilation of stories by Isaac Asimov written between 1940 and 1950), this would be an excellent time to do so, while we are in the first stages of developing true AI. Asimov starts with AI’s infancy, enchanting us with a robot who takes care of a child and loves to hear stories from children’s books.

As the stories progress, they become more and more unsettling. By the end of the book, the future of humankind is dystopian indeed. Without giving away any spoilers, the authors fear that if we don’t seize the moment and grapple with ethics and AI, the bleak future Asimov portrayed may be our own.

Sharon D. Nelson

Sharon D. Nelson, Esq. is a practicing attorney and the president of Sensei Enterprises, Inc. She is a past president of the Virginia State Bar, the Fairfax Bar Association and the Fairfax Law Foundation. She is a co-author of 17 books published by the ABA. snelson@senseient.com.

John W. Simek

John W. Simek is vice president of Sensei Enterprises, Inc. He is a Certified Information Systems Security Professional and a nationally known expert in the area of digital forensics. He and Sharon provide legal technology, cybersecurity and digital forensics services from their Fairfax, Virginia, firm. jsimek@senseient.com.