February 03, 2020 TECHNOLOGY

What Judges and Lawyers Should Understand About Artificial Intelligence Technology

By Judge Herbert B. Dixon Jr. (Ret.)

Before getting to the subject of this column, let us first address terminology. Many writers use the terms “artificial intelligence” (AI, as used in this column), “algorithm,” and “machine learning” interchangeably. From a purist point of view, however, these terms have different meanings, although the extent of those differences varies among writers. For this column, think of an “algorithm” as an instruction or set of instructions by which given inputs lead to a particular result. If an algorithm runs into input or data not anticipated by its preprogrammed instructions, AI fills in the gap: AI mimics the human ability to make a decision based on all available information. Add to that the concept of machine learning, the ability of AI to take into account the accuracy of its prior predictions and conclusions, which allows continuous improvement of the AI product. Accordingly, AI combines algorithms and machine learning as a substitute for the human brain to predict, analyze, forecast, and decide. Lastly, remember that AI technology, a product of human creation, can exhibit some of the imperfections of its human creators, a concern addressed in the closing section of this column.
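To make the distinction concrete, here is a minimal sketch in Python (purely illustrative and not drawn from any product discussed in this column): a fixed algorithm applies preprogrammed rules to its inputs, while the learning component adjusts its own behavior in light of how accurate its earlier predictions turned out to be.

# Hypothetical illustration of the column's terminology -- not any real legal product.

def rule_based_algorithm(days_late: int) -> str:
    """A fixed algorithm: preprogrammed rules map a given input to a result."""
    if days_late == 0:
        return "timely"
    elif days_late <= 30:
        return "late"
    else:
        return "delinquent"

class LearningPredictor:
    """A minimal 'machine learning' sketch: the prediction threshold is adjusted
    based on the accuracy of prior predictions, rather than fixed in advance."""
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def predict(self, score: float) -> bool:
        return score >= self.threshold

    def learn(self, score: float, actual: bool) -> None:
        # Nudge the threshold to avoid repeating the kind of mistake just made.
        if self.predict(score) and not actual:
            self.threshold += 0.05   # was too eager, so raise the bar
        elif not self.predict(score) and actual:
            self.threshold -= 0.05   # was too cautious, so lower the bar

predictor = LearningPredictor()
for score, actual in [(0.4, True), (0.45, True), (0.6, False)]:
    predictor.learn(score, actual)
print(round(predictor.threshold, 2))  # the threshold has shifted in light of feedback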

Legal Research

The legal profession uses AI technology for research more than any other law-related application. Since the 1970s, lawyers and law students have had access to computerized legal databases maintained and updated in real time. Westlaw, LexisNexis, Google Scholar, Fastcase, and Ross Intelligence are among the most recognized names providing legal research tools. Each of these services provides multiple ways of using artificial intelligence to search law-related sources and documents (e.g., cases, statutes, and law review articles). Their search methods include the use of natural language, Boolean operators (e.g., AND, OR, XOR, and NOT) and other parameters, and citation checking (e.g., Shepard’s). Given this recognizable impact of AI within the legal profession, it is not difficult to appreciate the numerous other law-related uses of AI technology.
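As a rough illustration of the Boolean portion of those search methods, a minimal sketch follows. The documents and query are hypothetical, and commercial research platforms use far more sophisticated retrieval and ranking than simple keyword matching.

# A minimal sketch of Boolean keyword search over documents -- purely illustrative.

documents = {
    "Case A": "negligence standard of care automobile",
    "Case B": "breach of contract damages",
    "Case C": "negligence damages contributory fault",
}

def matches(text: str, *, all_of=(), any_of=(), none_of=()) -> bool:
    """Return True if the document satisfies the AND / OR / NOT terms supplied."""
    words = set(text.lower().split())
    return (all(t in words for t in all_of)
            and (not any_of or any(t in words for t in any_of))
            and not any(t in words for t in none_of))

# Equivalent of the query: negligence AND damages NOT contract
hits = [name for name, text in documents.items()
        if matches(text, all_of=("negligence", "damages"), none_of=("contract",))]
print(hits)  # ['Case C']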

Predicting Supreme Court Results

There have been numerous efforts to predict the outcome of pending Supreme Court cases. In one study, researchers used statistical models to predict the outcome of each case before its scheduled argument during the 2002 term.1 In a simultaneous study, the researchers obtained a set of independent predictions from legal experts. The results? Well, the statistical model fared better than the experts at predicting the outcome of cases that term. The model correctly predicted 75 percent of the court’s affirm/reverse results. The experts correctly predicted 59.1 percent.

In another study, AI researchers created a machine-learning statistical model to predict the outcome of Supreme Court cases for each year from 1816 to 2015.2 The researchers designed the algorithm to look at all prior years for associations between case features and outcomes. After performing this analysis, the AI would predict an affirmance or reversal of the lower court’s decision and predict how each justice would vote. After securing each year’s predictions, the researchers updated the model to include the outcomes of that year’s cases, which allowed the program to learn from its previous predictions before moving on to the next year. The results were impressive. From 1816 until 2015, the algorithm correctly predicted 70.2 percent of the court’s 28,000 decisions and 71.9 percent of the justices’ 240,000 votes.
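The year-by-year procedure can be sketched in simplified form. The illustration below uses a toy data set and substitutes a trivial "majority outcome" predictor for the study's far richer model and feature set; it shows only the walk-forward structure of training on prior years, predicting the current year, and then folding that year's actual outcomes back into the training data.

# A simplified sketch of the walk-forward design described above (hypothetical data).
from collections import Counter

# (year, outcome) pairs, where the outcome is "reverse" or "affirm" -- toy data only
cases = [(1816, "reverse"), (1816, "affirm"), (1817, "reverse"),
         (1818, "reverse"), (1818, "affirm"), (1819, "reverse")]

history: list[str] = []          # outcomes from years already processed
correct = total = 0

for year in sorted({y for y, _ in cases}):
    this_year = [outcome for y, outcome in cases if y == year]
    if history:                  # can only predict once there is a past to learn from
        prediction = Counter(history).most_common(1)[0][0]
        correct += sum(1 for outcome in this_year if outcome == prediction)
        total += len(this_year)
    history.extend(this_year)    # "learn" from this year before moving on

print(f"accuracy: {correct}/{total}")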

Of note regarding the predictive capability of AI technology is Context, a litigation analytics tool by LexisNexis that gives lawyers in state and federal trial courts a judge’s statistical record of rulings on similar issues (including the cases cited and language quoted in those rulings). The goal of this tool is to help lawyers make arguments using language the judge is most likely to find persuasive: language the judge has used in prior rulings. Other litigation analytics tools offering the assistance of AI technology include Westlaw Edge by Thomson Reuters, Bloomberg Law’s Litigation Analytics, and Gavelytics, which recently announced a partnership with CourtCall by which attorneys who are scheduled to appear before a judge via CourtCall’s remote-appearance technology may obtain analytics on that judge in advance of the appearance.3

Crime Prediction, Prevention, and Detection

Numerous service providers offer AI applications to businesses and governments for crime prediction, prevention, and detection. The theory behind these applications is familiar—namely, with enough data and other relevant information, the AI application can flag events and trends that historically precede or suggest the likelihood of criminal activity. The types of data often included in such analyses are the occurrence of gunshots and AI evaluations of video camera recordings. From this analysis, the AI can identify suspicious anomalies in the movement of people, the presence of unattended objects in crowded venues, and even the presence of license plate numbers included in the AI database because of theft or reports of their presence at the scene of, or involvement in, other criminal activity. In some instances, AI applications predict when or where new crimes are most likely to occur. These may include instances where, for example, the occurrence of multiple burglaries in one geographical area statistically correlates with burglaries in another geographical area, or other statistical data show that certain crimes tend to cluster in times and locations.4
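A highly simplified sketch of the hot-spot idea follows. The data, threshold, and single signal (weekly burglary counts) are invented for illustration; real systems weigh many more signals than this.

# Flag an area when its latest incident count is well above its own historical average.
from statistics import mean

weekly_burglaries = {          # trailing weeks of reported burglaries per area (hypothetical)
    "Area 1": [2, 3, 2, 2, 9],
    "Area 2": [1, 0, 2, 1, 1],
    "Area 3": [4, 5, 4, 6, 5],
}

def flagged(counts: list[int], multiplier: float = 2.0) -> bool:
    """Flag when the latest week exceeds `multiplier` times the prior average."""
    *history, latest = counts
    return latest > multiplier * mean(history)

hot_spots = [area for area, counts in weekly_burglaries.items() if flagged(counts)]
print(hot_spots)  # ['Area 1']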

Judicial Bail and Sentencing Determinations

Judicial officers use AI technology in criminal cases to help determine whether to detain an individual pretrial and what sentence to impose in the event of a conviction. In an earlier guest technology column for this magazine, Judge Noel Hillman wrote about the use of AI technology to gauge the risk of recidivism.5 Similarly, some states have begun investing in AI software to aid judges in bail determination decisions.6 Essentially, the AI software is a risk assessment tool that quantifies the risk an individual poses of recidivism.
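In rough outline, such a tool combines weighted risk factors into a score that is then mapped to a category. The sketch below is hypothetical; its factors, weights, and cutoffs are invented for illustration and do not reflect any actual bail or sentencing instrument.

# An illustrative sketch of how a risk assessment tool might combine factors into a score.

RISK_WEIGHTS = {
    "prior_convictions": 2.0,
    "prior_failures_to_appear": 3.0,
    "age_under_25": 1.5,
}

def risk_score(defendant: dict) -> float:
    """Weighted sum of the defendant's risk factors."""
    return sum(weight * defendant.get(factor, 0)
               for factor, weight in RISK_WEIGHTS.items())

def risk_category(score: float) -> str:
    """Map the numeric score to the category a judicial officer would see."""
    if score < 3:
        return "low"
    elif score < 7:
        return "moderate"
    return "high"

defendant = {"prior_convictions": 2, "prior_failures_to_appear": 1, "age_under_25": 0}
score = risk_score(defendant)          # 2*2.0 + 1*3.0 + 0*1.5 = 7.0
print(risk_category(score))            # 'high'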

eDiscovery

As I wrote almost eight years ago,7 legal practitioners have moved to using computers and AI applications to search digital databases for electronically stored information that might be relevant to the underlying litigation. This process involves sophisticated programs and algorithms by which a computer is trained, during several staged interactions with a human reviewer, to determine relevance. Moreover, a 2011 article8 that I referenced in my earlier writing is still cited for the proposition that technology-assisted review of electronically stored information is more accurate and efficient than manual review of documents by human hands. In the 2011 article, the authors determined that technology-assisted review identified an average of 76.7 percent of the relevant documents, with approximately 15.3 percent of the retrieved documents being irrelevant, while human review of every document identified an average of 59.3 percent of the relevant documents, with an average of 68.3 percent of the retrieved documents being irrelevant.
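The staged, human-in-the-loop training can be sketched in miniature. The illustration below is hypothetical: its "model" is simple term overlap with documents a reviewer has already marked relevant, standing in for the statistical classifiers that real technology-assisted review tools use, and the documents and labels are invented.

# A minimal sketch of staged, human-in-the-loop relevance review (toy data).

documents = {
    1: "merger negotiation pricing memo",
    2: "holiday party catering order",
    3: "pricing strategy draft for merger",
    4: "parking garage access form",
    5: "board memo on merger pricing risks",
}
true_relevance = {1: True, 2: False, 3: True, 4: False, 5: True}  # the reviewer's judgment

relevant_terms: set[str] = set()
labeled: dict[int, bool] = {}

def score(text: str) -> int:
    """Crude relevance score: overlap with terms seen in documents marked relevant."""
    return len(relevant_terms & set(text.split()))

# Three staged rounds: the reviewer labels the highest-scoring unlabeled document,
# and the model's notion of relevance is updated from that human decision.
for _ in range(3):
    unlabeled = [d for d in documents if d not in labeled]
    pick = max(unlabeled, key=lambda d: score(documents[d]))
    labeled[pick] = true_relevance[pick]            # human review of one document
    if labeled[pick]:
        relevant_terms |= set(documents[pick].split())

remaining = {d: score(documents[d]) for d in documents if d not in labeled}
print(remaining)  # low-scoring leftovers the tool would deprioritize: {2: 0, 4: 0}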

The use of AI technology in eDiscovery is not limited to written documents. AI natural language audio processing has been used in Telephone Consumer Protection Act cases to filter 33,000 hours of telephone recordings down to 140 hours of relevant data for review by a human.9 In one proclamation of extreme cost savings from using AI technology during the eDiscovery process, one vendor’s law-firm client reported that it was able to eliminate 100 redundant reviewers and achieve a 95 percent cost savings. This author has not independently verified these claims of time and cost savings.

Contract Writing/Review

AI has proven useful for automating tasks involving contracts. Law firms and other entities are often responsible for preparing and managing many contracts at a given time, which leads to inefficiencies in revising, tracking, and sorting them. AI contract software is trained by subject-matter experts, after which the software identifies and extracts key data points within a contract and helps with organizing, managing, and monitoring contracts. While managing variations across a significant number of contracts is a major undertaking, AI contracting software has that capability, along with the capability to record and standardize contract provisions, assess risk, and identify instances of noncompliance faster than human reviewers.10 One study comparing the performance of artificial intelligence to human lawyers reviewing standard business contracts reported that the AI technology achieved an average accuracy rate of 94 percent, compared to an average accuracy rate of 85 percent for the human lawyers.11
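In simplified form, the extraction step might look like the sketch below. The contract language and fixed patterns are invented for illustration; commercial tools rely on models trained by subject-matter experts rather than handwritten patterns.

# A simplified sketch of pulling key data points out of contract text (hypothetical).
import re

contract = """This Agreement is effective as of January 5, 2020 and shall be
governed by the laws of the State of Delaware. Either party may terminate
upon 60 days written notice."""

PATTERNS = {
    "effective_date": r"effective as of ([A-Z][a-z]+ \d{1,2}, \d{4})",
    "governing_law": r"laws of the State of ([A-Z][a-z]+)",
    "notice_period": r"(\d+) days written notice",
}

def extract_key_terms(text: str) -> dict:
    """Return the first match for each key data point, or None if absent."""
    flat = " ".join(text.split())          # normalize line breaks and spacing
    found = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, flat)
        found[name] = match.group(1) if match else None
    return found

print(extract_key_terms(contract))
# {'effective_date': 'January 5, 2020', 'governing_law': 'Delaware', 'notice_period': '60'}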

Final Thoughts About AI

Users should be concerned about whether AI software is inherently biased or reflects the biases of its developers. The strength and weakness of AI is that the software does exactly what it is programmed to do; a human must program the AI system at the outset and review the results it produces.12 For example, Amazon halted one of its AI projects, a recruiting tool intended to find the best employees, because those working on the project came to realize that the program was biased against women. Why? The sample data set was 10 years old, from a time when fewer women were in the tech field, and the outdated data biased the software in favor of male candidates.13 As for bail software, opponents have expressed concern that these programs, instead of bringing fairness to the system, create the opposite effect. One recent study found that a state-sanctioned bail algorithm disproportionately benefited white offenders compared to offenders from other communities.14
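The kind of disparity such studies measure can be illustrated with a short sketch comparing how often a tool recommends a favorable outcome across groups. All numbers below are hypothetical, and the simple rate comparison stands in for the more careful statistical analyses researchers actually perform.

# Compare how often a tool recommends a favorable outcome (e.g., release) per group.

recommendations = [
    # (group, recommended_release) -- hypothetical tool output
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def release_rate(group: str) -> float:
    """Share of individuals in the group for whom release was recommended."""
    decisions = [rec for g, rec in recommendations if g == group]
    return sum(decisions) / len(decisions)

rate_a = release_rate("group_a")    # 0.75
rate_b = release_rate("group_b")    # 0.25
print(f"release rates: {rate_a:.2f} vs {rate_b:.2f}")
print(f"disparity ratio: {rate_b / rate_a:.2f}")   # far below the common four-fifths rule of thumb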

A final concern comes from the lack of transparency by companies that create AI software. Transparency can help mitigate issues of fairness and discrimination and thus inspire confidence in the AI tool.15 Distrust can arise if developers are not transparent about the inner workings of their products or cannot explain why the AI came to a specific conclusion. For the foreseeable future, human attorneys are a necessary guardrail to protect the client. As noted in Comment 8 to Rule 1.1 of the ABA Model Rules of Professional Conduct, a lawyer should keep abreast of changes in the law and its practice, “including the benefits and risks associated with relevant technology.” That includes AI.

 

Judge Dixon wishes to thank Lisa Jiron, a second-year student at The George Washington University Law School, for her help preparing this article and assistance researching the subject of artificial intelligence.

Endnotes

1. Theodore W. Ruger, Pauline T. Kim, Andrew D. Martin & Kevin M. Quinn, The Supreme Court Forecasting Project: Legal and Political Science Approaches to Predicting Supreme Court Decisionmaking, 104 Columbia L. Rev. 1150 (May 2004), https://bit.ly/2tjRcgY.

2. Daniel Martin Katz, Michael J. Bommarito II & Josh Blackman, A General Approach for Predicting the Behavior of the Supreme Court of the United States, 12 PLoS One e0174698 (Apr. 12, 2017), https://bit.ly/2sxg8RO.

3. Robert Ambrogi, This Tech Can Turn the Tables in Litigation, Above the Law (Dec. 3, 2018, 10:02 AM), https://bit.ly/358X4qn.

4. D. Faggella, AI for Crime Prevention and Detection—5 Current Applications, Emerj (Feb. 2, 2019), https://bit.ly/2SAhMNk.

5. Judge Noel L. Hillman, The Use of Artificial Intelligence in Gauging the Risk of Recidivism, 58 Judges’ J., no. 1, Winter 2019, https://bit.ly/369bGHK.

6. Cheryl K. Chumley, Freedom, in the Hands of an Algorithm, Wash. Times (Apr. 2, 2019), https://bit.ly/35aots3.

7. H.B. Dixon, Automating the Search and Review of ESI, 51 Judges’ J., no. 3, Summer 2012, at 36.

8. Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 11, 37, tab. 7 (2011), https://bit.ly/2SF8m2V.

9. Michael McDonald, Artificial Intelligence Can Reduce 99% of Review Hours, L. Tech. Today (Sept. 6, 2017), https://bit.ly/2tkjSGh.

10. Beverly Rich, How AI Is Changing Contracts, Harv. Bus. Rev. (Feb. 12, 2018), https://bit.ly/2QyMYtK.

11. Lawgeex, Comparing the Performance of Artificial Intelligence to Human Lawyers in the Review of Standard Business Contracts (Feb. 2018), available at https://bit.ly/2F90K0F.

12. Timothy J. Carroll & Manny Caixeiro, Pros and Pitfalls of Artificial Intelligence in IP and the Broader Legal Profession, 11 Landslide, no. 3, Jan./Feb. 2019, https://bit.ly/36d7MgK.

13. Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, Reuters (Oct. 10, 2018, 10:12 PM), https://reut.rs/2u0RJo5.

14. Tom Simonite, Algorithms Should’ve Made Courts More Fair. What Went Wrong?, Wired (Sept. 5, 2019), https://bit.ly/2syKJyy.

15. Andrew Burt, The AI Transparency Paradox, Harv. Bus. Rev. (Dec. 13, 2019), https://bit.ly/369LKvq.


Judge Herbert B. Dixon Jr. (Ret.)

Judge Herbert B. Dixon Jr. retired from the Superior Court of the District of Columbia after 30 years of service. He is a former chair of both the National Conference of State Trial Judges and the ABA Standing Committee on the American Judicial System and a former member of the Techshow Planning Board. You can reach him at [email protected]. Follow Judge Dixon on Twitter @Jhbdixon.