February 03, 2020 Feature

Here There Be Dragons: The Likely Interaction of Judges with the Artificial Intelligence Ecosystem

By Fredric I. Lederer

Artificial intelligence, or “AI,” is frequently referenced both in the news and in commercial advertisements. It often appears that nearly everything is or soon will be a product of AI. In fact, however, other than natural language processing, true AI is still in its early stages and far less common than advertising would suggest. This is not to say that it is or will be unimportant. Rather, judges increasingly will be dealing with AI and related technologies, including the Internet of Things (IoT), Data Analytics, Blockchain, Cryptocurrencies, and the like, that we collectively refer to as the “AI Ecosystem.”1 Unfortunately, the AI Ecosystem can be immensely complicated. It likely will pose new challenges to the judges who will have to resolve the legal disputes that will stem from it and who will have to use the ecosystem in their daily work. In the medieval period, maps of unexplored territory sometimes bore the inscription Here There Be Dragons as a warning of the possible fearsome consequences of the unknown. Such an appellation is not unreasonable as we come to grips with the AI Ecosystem.

In its most basic form, AI is machine learning.2 An AI system continuously learns, modifying its programming to better accomplish its set goals. AI systems are sophisticated creations, and the possibility of substantial error is ever-present. The accuracy of an AI system depends on its original programming, the quality of its training, and the quantity and quality of the data it uses. Training consists of exposing the program or algorithm to immense amounts of data, sufficiently labeled or described so that the algorithm later can compare unknown data to the rules it formulated from the training data and draw conclusions from the new data. Training can be especially problematic, as exposure to inadequate or misleading data can result in highly erroneous AI conclusions. Further, training data are put together by human beings, and implicit or accidental bias can result in biased training and ultimately yield error-ridden and even discriminatory results. In our AI work at the Center for Legal and Court Technology, we have discovered that lawyers, and perhaps judges as well, sometimes assume that AI is simply a complicated computer program and nothing more. That is incorrect! By their very nature, AI algorithms change constantly as they reprogram themselves. Further, an algorithm’s output or decision is entirely dependent on the data it uses. Sometimes those data are erroneous and/or biased, and that can alter the AI system’s behavior in highly undesirable ways. Accordingly, not only is AI decision-making not transparent, but it may be impossible to determine how an algorithm reached a given conclusion.
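To make the idea of training concrete, the following is a minimal sketch, not drawn from the article or from any actual court or commercial system, using the scikit-learn Python library. The single “risk” feature, the labels, and the numbers are all invented; the only point is that the very same ambiguous case can receive a different conclusion depending solely on how the labeled training data were assembled.

```python
# Minimal, hypothetical sketch: how labeled training data shape an algorithm's
# later conclusions, and how skewed training data skew those conclusions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def train(n_low_risk, n_high_risk):
    """Train a toy classifier on invented labeled examples.
    Each example is one made-up feature; the label is 0 (low risk) or 1 (high risk)."""
    low = rng.normal(loc=2.0, scale=1.0, size=n_low_risk)
    high = rng.normal(loc=4.0, scale=1.0, size=n_high_risk)
    X = np.concatenate([low, high]).reshape(-1, 1)
    y = np.array([0] * n_low_risk + [1] * n_high_risk)
    return LogisticRegression().fit(X, y)

borderline = np.array([[3.0]])   # a new, ambiguous case sitting between the two groups

balanced = train(500, 500)       # training data represent both groups equally
skewed = train(500, 50)          # one group is badly under-represented

# The same input receives different estimated probabilities of "high risk"
# depending solely on what the algorithm was trained on: the data, not the
# code, drive the output.
print("balanced training:", balanced.predict_proba(borderline)[0, 1])
print("skewed training:  ", skewed.predict_proba(borderline)[0, 1])
```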

To complicate matters, AI does not exist in a vacuum. As the expression “AI Ecosystem” suggests, AI is only a part of a massively interdependent network of technologies, data, and humans. An AI algorithm is likely to rely on real-world data obtained via the Internet, mostly by way of IoT devices, meaning data derived from sensors, medical devices, phones, watches, automobiles, washers and dryers, and nearly anything that is connected to the Internet. AI systems largely exist for the purpose of analyzing immense amounts of data and drawing conclusions from that analysis. Consider China’s planned social credit scoring. A person’s score will be the result of analysis of a vast amount of information, including that gained from the person’s interactions with others as determined by AI analysis of video data, including facial recognition, communications, and numerous other data sources. Recall that, as already noted, not only can data be erroneous, but human classification of those data may be biased, which can also generate erroneous AI results.3 And what data will be available? The world’s nearly limitless and often interdependent data will present pressing questions. The right to be forgotten might mean that Google and similar data vendors must remove or foreclose human access to certain information, but it is likely that no one will be able to know what data are being used by an AI system.

From a judge’s perspective, AI issues arguably can be divided into two categories: legal issues flowing from the use of AI and court use of AI for court purposes. A short caveat is in order before proceeding further: This article deals with AI. Many technology products, services, and results do not use real AI, but the absence of AI does not necessarily prevent similar or identical legal issues from arising.

Legal Issues

Any discussion of AI and the law can posit delightful jurisprudential questions such as “To what extent should an AI be considered a legal person and for what purposes?,” a question that is perhaps foreshadowed by the law of corporations. On a related note, intellectual property issues are a significant area of current interest. Although a monkey cannot copyright pictures it took, who owns or should own a copyright for an AI-produced oil painting that sold for $432,000?4 Who can or will own a patent for a device designed “by” an AI?5 In August 2019, the U.S. Patent and Trademark Office requested public comments on matters related to AI invention, and a test case is now pending in multiple national patent offices.6

Most cases that will arise in the near term likely will be ones with classic issues, complicated by the nature of AI and its ecosystem. Perhaps the most obvious is tort liability for damages caused by an AI system. The difficulty of determining how the “black box” AI reached the result it implemented (with or without human oversight) may make it impossible to determine causation or, should the result be based on multiple erroneous data inputs, how to apportion damages. One can plausibly argue that contemporary tort law is sufficient, as tort law has long dealt with similar questions. But the very nature of AI is problematic, as the number of possible causes and the identity of the data points and data owners involved may be so large as to create qualitatively different problems than in the past. Tort law could cope with this through strict liability (noting, however, that the extent to which product liability per se extends to economic damages is not a simple question), but the impact that approach might have on developing technologies might be unacceptable. Discussion of how best to deal with injuries caused by self-driving cars has, for example, often suggested administrative regulatory systems that would move injury compensation outside the tort system. It may well be that certain types of AI injuries ultimately will be uncompensated and viewed as the unavoidable consequence of otherwise socially desirable improvements.

There are a vast number of other civil legal issues related to the AI Ecosystem. Are cryptocurrencies “securities” within the Securities and Exchange Commission’s jurisdiction? What are the privacy implications of AI systems that use vast amounts of data in unexpected ways? To what extent should AI Ecosystem manufacturers be liable for cybersecurity flaws that permit “hacking”? Would the nuisance theory now being used in the opioid litigation permit a successful suit against a company that knowingly sold home devices without “adequate” cybersecurity protection, which devices allowed hackers to penetrate a home or business network for criminal purposes? What should be the result if the tortfeasor crashed a city or regional electrical grid via that cyber weakness? Would it matter if the company, in order to keep its product less expensive, omitted cybersecurity protection but included a warning label telling the buyer so?

In the area of criminal law, the National Institute of Justice has announced that it views the primary criminal law applications to be “public safety video and image analysis, DNA analysis, gunshot detection, and crime forecasting.”7 We are only now beginning to appreciate the potential effect that the AI Ecosystem will have on the law of search and seizure. On the one hand, the use of AI-based surveillance raises critical issues. Does AI-augmented facial recognition based on images captured from street-mounted cameras violate the Fourth Amendment’s prohibition of unreasonable searches and seizures? The traditional answer presumably would be, “Of course not; the person was in public and anyone can capture an image; the person did not have a reasonable expectation of privacy.” But the traditional answer does not take into account the ability of AI to correlate those data with IoT data captured from literally millions of sources.

Imagine a self-driving car operated by a central AI system that also is responsible for all the other automobiles in the area. If police are following in a police sedan connected to the IoT network and observe that the self-driving car is exceeding the speed limit, can the police stop the speeding car when it is under computer control and its behavior, and the cause of the violation, can be monitored and corrected directly through the central computer? If we speak in traditional terms, would there be probable cause or some lesser constitutional cause to stop the vehicle? The “perpetrator” would not be the driver but rather the central computer. If there is no justification or need to stop the vehicle, what is the general effect on law enforcement given that today police often use vehicle offenses as justifications for broader subterfuge searches? For that matter, what would be the effect on police employment?

As these issues reach the courts, the difficulty for some judges will be lack of technological knowledge and understanding. There should be little need for judges to learn how to code, but a significant understanding of the basics of our cyber world will be required of many. We likely will be in much the same situation as when the Supreme Court decided Daubert,8 and judges became the validity arbiters of science, medicine, and technology. For many judges, however, the most direct effect of AI and its ecosystem will be the use of AI for court administration and case resolution.

Court Administration and Case Resolution

AI potentially can be used to assist in case scheduling and case management. In the most extreme variation, imagine a court AI that, due to its connection to the Internet and IoT devices, has the details of everyone’s real-world daily life available and can automatically schedule a traffic case, for example, or witness testimony in between picking up the kids from school and a rescheduled medical appointment. Of course, that raises major privacy concerns, but it is unclear whether privacy in a traditional sense will survive the AI Ecosystem.

Some court data procedures may benefit from AI. Effective conviction expungements are nearly impossible given how far data travel today. An AI system might be able to find and negotiate at least some limits on sharing those data.

Legal research is already benefiting from AI. LexisNexis and Westlaw have announced AI-based capabilities. ROSS Legal Research proclaims, “We’re building the world’s best legal research system powered by artificial intelligence.”9 Of course, getting an answer to a legal research question without doing the research forecloses the serendipity that often creates new insights.

Phillip Knox and Peter C. Kiefer reported in 2018 that their 2015 survey of court professionals showed significant doubt about the use of technology to assist judges.10 Notwithstanding this, perhaps the best-known, and most controversial, judicial use of AI has been that of tools marketed to help predict future criminal misconduct for use in pretrial release and sentencing decisions. In Loomis v. Wisconsin,11 the state supreme court sustained the trial court’s use of the proprietary COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system despite the vendor’s use of the trade secret privilege to prevent analysis of the algorithm and substantial allegations that the system was racially biased. The court’s primary justification for its decision appears to be that the trial judge, rather than the AI system, sentenced the defendant and that the COMPAS information was only one factor in the sentencing decision. A reader of the opinion might, however, infer another possible justification, one with important AI policy aspects. To put it charitably, human sentencing is imperfect. Inconsistent and even arbitrary sentencing has been troublesome. Some jurisdictions, such as the federal courts, have created sentencing guidelines in the hope that they might improve sentencing quality, and those “improvements” themselves have been criticized. Technologically augmented sentencing, meaning sentencing augmented by the AI Ecosystem, holds the possibility of eventually bringing us a better, bias-free sentencing procedure. But we will not get there if we bar the beginning uses of AI because it is imperfect, especially given the already fallible and often biased sentencing done by humans. To what degree should we tolerate even probable technical error if customary human behavior likely is even worse? And that painful point brings us to the key question ordinarily addressed to AI court use: Can and should AI systems be used to adjudicate cases?

It seems clear that the use of the AI Ecosystem for at least some types of case resolution is possible and arguably desirable. Administrative law, with its highly specialized and often complex, data-rich case types, may supply especially likely candidates. Courts of first instance, in particular, often have large dockets of relatively minor cases. Not only do these cases place a burden on the court, but the inability to retain counsel means that large numbers of litigants are self-represented. Small claims cases and specialized dockets such as evictions may prove to be good cases for AI determination. A good online dispute resolution (ODR) system could handle many of these cases quickly and efficiently, even without AI. Naysayers immediately would argue that such resolution would be inferior to a decision made by a human judge assisted by counsel for the parties. That may well be correct, but the American reality is that we are highly unlikely to supply such parties with free counsel. Imagine that you are a tenant about to be evicted and cannot afford counsel. What would you prefer: going to court unrepresented, often on a very heavy and fast-moving docket and with the landlord’s lawyer opposing you, or having your case adjudicated by an impartial AI?

There are, of course, multiple reasons to limit the use of AI for actual adjudication. No matter how questionable it may be as a matter of science, our legal system prizes the assumed ability of human fact finders to determine and use demeanor evidence. It is hard to see how we could use an AI to make such decisions. An equally critical consideration is the ability and responsibility of judges to interpret and make law. At least at present, AI systems can operate only on the basis of existing rules. If we were to create an AI judge, significant new legal rules would not be possible. Rather, we would be bound by statutes and existing precedents. Absent statutory change, we would need human judges if we want the ability to break with past precedents.

Conclusion

The AI Ecosystem will present judges with problematic opportunities and challenges. Yet we should always recall that in older days, despite the warnings of Here There Be Dragons, careful, courageous, and innovative mariners became successful explorers, and none of them ever encountered an actual dragon. We can hope for no less from our judges.

 

The author acknowledges the support of his faculty partners in this continuing AI Ecosystem venture, Professor Iria Giuffrida of William & Mary and Associate Dean and Professor Nicolas Vermeys of the University of Montreal and, as a visiting professor, William & Mary. Our AI Ecosystem work is made possible by the Silicon Valley Community Foundation grant funded by Cisco Systems.

Endnotes

1. See generally Iria Giuffrida, Fredric Lederer & Nicolas Vermeys, A Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law, 68 Case W. Res. L. Rev. 747 (2018).

2. There are different forms of AI, including sophisticated neural networks. The key here, however, is that true AI systems reprogram themselves—they “learn.”

3. See, for example, the widely reported test in which Amazon’s facial recognition software misidentified 28 members of Congress by matching them with criminal mugshots.

4. Natashah Hitti, Christie’s Sells AI-Created Artwork Painted Using Algorithm for $432,000, Dezeen (Oct. 29, 2018), https://www.dezeen.com/2018/10/29/christies-ai-artwork-obvious-portrait-edmond-de-belamy-design (“Portrait of Edmond de Belamy”).

5. Martin Coulter, Patent Agencies Challenged to Accept AI Inventor, Fin. Times, July 31, 2019.

6. Jared Council, Can AI Receive a Patent, Wall St. J., Oct. 14, 2019, at R9.

7. Christopher Rigano, Using Artificial Intelligence to Address Criminal Justice Needs, NIJ J., no. 280, Jan. 2019, at 1, 2, https://www.ncjrs.gov/pdffiles1/nij/252038.pdf.

8. Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993).

9. About Us, ROSS, https://rossintelligence.com/about.html.

10. Phillip Knox & Peter C. Kiefer, Future of the Courts: The Next Ten Years Combined Survey Results Comparing NACM Members with Overall Responses 41 (Oct. 31, 2018), https://nacmnet.org/wp-content/uploads/Trends-Kiefer-and-Knox.pdf.

11. 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017).



Fredric I. Lederer is chancellor professor of law and director of the Center for Legal and Court Technology (CLCT) at William & Mary Law School. The mission of CLCT, a joint initiative of William & Mary and the National Center for State Courts, is to improve the administration of justice through appropriate technology, including artificial intelligence, the Internet of Things, and related technologies.