February 03, 2020 Feature

A Call to Action: Litigating and Judging Artificial Intelligence Cases

By Michael Arkfeld

We hear the endless drumbeat of artificial intelligence (AI) and feel the monumental impact AI is having on every aspect of our lives. Within the business world, AI is now making its way into how we recruit, select, and retain talent; design and deliver our products and services; interact with our customers; and drive business innovation. In government, we see that AI is being used to predict recidivism, select jurors, calculate social benefits, and rate teachers’ performance in the classroom. No sector of our economy has escaped the influence of AI.

But what happens when AI discriminates, injures, or monopolizes? It then becomes the subject of discovery and litigation. What are the legal issues from preservation to production to trial when the AI software is continually evolving and subject to an ever-changing set of inputs and ephemeral algorithms?

Though AI has been discussed and developed since the 1950s, what is new is the markedly increased computational power for processing data and the availability of training data and “big data,” which together lead to practical breakthroughs in AI. These two factors, coupled with complex algorithms, have produced beneficial outcomes in areas such as medical diagnosis and self-driving vehicles, among many other AI applications that number in the tens of thousands.

This article will not address the use of AI and its effect on the practice of law. We are surrounded by a daily barrage of stories claiming that AI legal applications will replace judges and lawyers and revolutionize the legal profession. Though such claims are, in my opinion, overstated, AI has had, and will continue to have, an effect on many judicial and law firm functions, including eDiscovery/document review, legal research, summary/insight/predictive tools, billing, and contract development. The efficacy of these AI applications is covered in many review articles available on the web.

This article instead takes on AI from a critical perspective, beginning by defining AI and considering its future. I will examine beneficial and abusive AI applications and the ethical and sanction consequences of failing to understand AI. Finally, I will address the legal issues (such as preservation) facing judges and practitioners litigating this transformational technology.

What Is AI and What Are Its Components?

There is no general consensus on the definition of AI. Generally, “artificial intelligence” is the term used to describe how computers can perform tasks normally viewed as requiring human intelligence, such as recognizing speech and objects, making decisions based on data, and translating languages. In this sense, AI mimics certain operations of the human mind.

In addition, an important characteristic, and part of many AI applications, is “machine learning,” in which computers use algorithms (rules) embodied in AI software to learn from data and adapt with experience.1 In effect, based on new data, the AI software will change its algorithm automatically and produce a different outcome. This capability obviously raises major legal issues involving preservation of the algorithms and the data used to power an AI application that is challenged as harmful.
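To make this concrete, here is a minimal sketch (the classifier, thresholds, and data are invented for illustration; no real AI system is this simple) of how a learning system’s internal rule, and therefore its outcome for the same input, shifts as new data arrive:

```python
class RunningMeanClassifier:
    """Flags a value as 'high risk' if it exceeds the mean seen so far."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def learn(self, value):
        # Each new observation silently shifts the internal threshold.
        self.total += value
        self.count += 1

    @property
    def threshold(self):
        return self.total / self.count if self.count else 0.0

    def predict(self, value):
        return "high risk" if value > self.threshold else "low risk"


model = RunningMeanClassifier()
for v in [10, 12, 11]:
    model.learn(v)
before = model.predict(20)   # threshold is 11.0 -> "high risk"

for v in [40, 45, 50]:       # new training data arrive
    model.learn(v)
after = model.predict(20)    # threshold is now 28.0 -> "low risk"

print(before, after)
```

The same input is scored as “high risk” before the new data arrive and “low risk” afterward, which is why preserving only the current version of an algorithm may not capture the version that produced the challenged outcome.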

How Does AI Work?

In its simplest form, after problem formulation,2 AI relies on data that are input into a computer, where a coded algorithm computes the data and produces an outcome. An algorithm is a sequence of instructions that are used to form a calculation, process data, and perform automated reasoning and other tasks and then produce an outcome.

These algorithms can be written by humans, or, with sufficient AI ability, a computer system can create its own algorithms in order to accomplish goals set by the master algorithms.3
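As a simple illustration of an algorithm in this sense (the loan-approval rule and its thresholds are invented for this sketch, not drawn from any real system), consider a fixed sequence of instructions that takes data in and produces an outcome:

```python
def approve_loan(income, debt, years_employed):
    """Return an outcome ('approve'/'deny') from three input data points."""
    if income <= 0:
        return "deny"                  # step 1: reject invalid input
    debt_ratio = debt / income         # step 2: compute a derived value
    if debt_ratio > 0.4:
        return "deny"                  # step 3: apply a fixed rule
    if years_employed < 2:
        return "deny"                  # step 4: apply another fixed rule
    return "approve"                   # step 5: default outcome

print(approve_loan(income=80_000, debt=20_000, years_employed=5))  # approve
print(approve_loan(income=80_000, debt=40_000, years_employed=5))  # deny
```

A human wrote every step here; in the machine-learning setting described above, the thresholds (or the rules themselves) would instead be derived from data and could change as the data change.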

The critical legal issue in all AI applications is whether a verifiable outcome is based on accurate and sufficient data and whether the algorithms used to produce the outcome have been tested and programmed correctly. The AI results must be transparent, not opaque, so that the outcomes are “unbiased, trustworthy, and fair.”4

Big Data

Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytic methods that extract value from data, and seldom to a particular size of data set. “There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem.” Analysis of data sets can find new correlations to spot business trends, prevent diseases, combat crime and so on. . . .

Data sets grow rapidly, [in part] because they are increasingly gathered by cheap and numerous information-sensing . . . devices such as mobile devices, aerial (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks.5

There is a need for “accurate” and sufficient data for AI purposes; otherwise the outcome may not be valid.6

For example, suppose you choose to predict political affiliation based on an individual’s hair color. The data input into the AI algorithm will determine the outcome, assuming the algorithm is programmed correctly. If the data set input into your algorithm consists of 10 people with black hair, 9 of whom are Republicans, then the “predicted” outcome would be that 90 percent of people with black hair are Republicans. If your data set is enlarged to 100,000 or 1,000,000 people, however, the outcome should be quite different and more accurate.
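A short simulation makes the sample-size point (the 55 percent “true” rate and the data below are invented for illustration):

```python
import random

def estimated_rate(sample):
    """Proportion of Republicans (1s) among black-haired people sampled."""
    return sum(sample) / len(sample)

small = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]   # 9 of 10 -> 90 percent
print(estimated_rate(small))              # 0.9

random.seed(0)
true_rate = 0.55                          # assumed "true" rate, for the sketch
large = [1 if random.random() < true_rate else 0 for _ in range(1_000_000)]
print(round(estimated_rate(large), 2))    # approximately 0.55, not 0.9
```

The procedure is identical in both runs; only the quantity of data changes, and with it the reliability of the outcome.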

This simplistic AI example would obviously become much more complex if we were to add in more attributes for individuals, rendering the algorithm more complex in order to compute these additional attributes. Say, for instance, we include religious affiliation, social media activity, sports fan affiliation, arts supporter, age, gender, race, and magazine subscriptions among some of the 5,000 points of contact per individual that some commentators assert are available. This would obviously, depending on the quality of the data and complexity of the algorithms, provide a more predictive assessment of whether an individual is a Republican.

The Future of AI and Its Development

As noted, all business sectors, professions, and government agencies are on the AI journey. Some have traveled down this path further than others, but make no mistake that AI is integrating itself into all aspects of our lives. Whether it be autonomous cars, manufacturing and distribution AI robots, or AI assistants to handle routine tasks on our behalf, AI applications will continue to proliferate.7

AI will result in job displacement. Initially, this will occur with jobs that are routine, and as AI advances, it will replace additional jobs. In this regard, however, it is important to note that presently AI applications are limited by their absence of creativity and compassion. In addition, and of utmost importance, is the vanishing privacy for most individuals and collective groups. Whether it be Cambridge Analytica’s impact on prior elections, eavesdropping by Amazon’s Alexa, or the accumulation of huge personal profiles, declining privacy will continue to have a significant impact on human rights.

Though initially labeled as AI, many applications are no longer classified as AI because they have become quite common. For example, the automatic correction of words in our word processing documents is AI but not labeled as such.

Without a doubt, many of these applications have driven a substantial increase in investment in AI. IBM alone has created over 20,000 AI applications for its customers.8

Some experts claim that AI will replace lawyers and take over many legal tasks.9 Granted, AI can “augment” the work of lawyers, or of any occupation, by making certain tasks easier to perform, such as technology-assisted review (TAR) of electronically stored information. However, most AI luminaries argue that we are far from the development of neural networks that will replace the judgment and reasoning ability of humans, including judges and lawyers.

Where Is AI Developed and Deployed?

As expected, AI development is primarily centered within academia, industry, and the military.10 There has also been a push to make available the underlying foundational computer code for many AI projects through the nonprofit OpenAI.11

Ethics and Sanction Consequences for Failing to Understand AI Issues

Judges and attorneys are now faced with new, but extremely important, challenges regarding AI and its numerous applications. Perhaps this was a motivating factor behind the ABA’s recent promulgation of Resolution 112, regarding AI:

RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.

This resolution, when coupled with the ABA’s Comment to the competency rule, has sounded an ethical alarm to judges and practitioners. Model Rules of Professional Conduct R. 1.1, Competence, provides:

A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.

The ABA’s Comment 8 to the competency rule states:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology (emphasis added). . . .12

In addition, the California State Bar Association has set the “gold standard” for an “attorney’s ethical duties in the handling of discovery of electronically stored information,” which would include data and algorithms in an AI case.13

The Duty to Preserve

As noted, AI depends upon data and algorithms to produce an outcome. If that outcome discriminates, injures, or causes other types of harm to an individual or organization, a party may face a lawsuit. When litigation is reasonably anticipated, attorneys on both sides have a responsibility to inform their clients of the duty to preserve and disclose responsive hard-copy documents and “electronically stored information” (ESI).14

Evidence, both ESI and paper, must be preserved when a party knows or reasonably should know that the evidence is relevant to actual or potential litigation, is reasonably likely to be requested during discovery, and/or is the subject of a pending discovery request.

The failure of a client to preserve evidence has led to unprecedented sanctions from the courts.15

Beneficial and Abusive AI Outcomes

Beneficial AI Outcomes

AI is used in all human endeavors. Some critical and beneficial AI uses include employment matters (recruiting, selection, performance evaluations, etc.), medical diagnoses, language translation and natural language processing, and the courts (sentencing and bail, pretrial release, policing, and juvenile cases, among others). This list is far from exhaustive but serves as an indication of the ubiquitous nature of AI applications.

Negative AI Outcomes

Though the benefits can be immense, AI, like most other technologies, is a double-edged sword. There can be, and have been, significant negative implications and outcomes to our human, physical, property, and digital rights.

The current abuse of AI technologies is one of the critical issues that the legal profession must be willing to address. Time is of the essence: AI development continues to accelerate, and we must advocate on behalf of those who are harmed by abusive AI practices. These abuses can originate in all facets of our lives. Daily, headlines point out the pitfalls and harm caused by AI. However, the scarcity of court decisions involving AI matters is alarming and notable, especially in light of the secret algorithms and questionable data used in some of these applications. The examples below document some of the negative outcomes of AI applications.

Teacher Evaluations

In Houston Federation of Teachers, Local 2415 v. Houston Independent School District, the court rejected, on due process grounds, the use of “privately developed algorithms to terminate public school teachers for ineffective performance.”16

Criminal Risk Assessment

The court held it was not a violation of the defendant’s due process rights to utilize COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which is a “risk-need assessment tool to provide decisional support for judges in criminal matters.”17 However, an important independent report claimed that this computer program used by courts for risk assessment was biased against black prisoners. The program, COMPAS, “was much more prone to mistakenly label black defendants as likely to reoffend—wrongly flagging them at almost twice the rate as white people (45 percent to 24 percent).”18
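The disparity described in the report is a comparison of false-positive rates, that is, how often people who did not reoffend were nonetheless flagged as likely to reoffend, computed separately for each group. A sketch of that calculation (the records below are invented for illustration; only the 45 percent and 24 percent figures come from the cited report) might look like:

```python
def false_positive_rate(records):
    """records: list of (flagged_high_risk, actually_reoffended) pairs."""
    non_reoffenders = [flagged for flagged, reoffended in records
                       if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

# Hypothetical decision records for two demographic groups.
group_a = [(True, False), (True, False), (False, False), (False, False),
           (True, True)]   # 2 of 4 non-reoffenders wrongly flagged
group_b = [(True, False), (False, False), (False, False), (False, False),
           (True, True)]   # 1 of 4 non-reoffenders wrongly flagged

print(false_positive_rate(group_a))   # 0.5
print(false_positive_rate(group_b))   # 0.25
```

A litigant challenging such a tool would need discovery of the underlying decision records to run exactly this kind of comparison.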

Home Health Care Budgets

The court held that the “budget tool,” involving automated spreadsheet calculations used to prepare home health aide budgets, was unreliable and arbitrarily deprived participants of their property rights, and hence violated due process.19

Terrorist Analysis

The court found that Facebook’s algorithmic “matching” of users who had demonstrated any interest in Hamas or in terrorism with other Facebook users with similar interests did not constitute content development, and thus Facebook had not provided material support to terrorists.20

Challenges to Judging and Attorneys Litigating AI Cases

In traditional litigation matters, the focus has been, and will continue to be, on employment issues, automobile accidents, contract breaches, criminal matters, domestic relations, and many other cases that involve individuals, businesses, and organizations.

With the rise of AI, new, as well as traditional, forms of harm are being visited upon many potential plaintiffs, oftentimes without their knowledge. Without question, AI enables networked computers to perform highly complex tasks, but it also raises challenging new legal liability issues.

Whether one is judging, prosecuting, or defending against a cause of action based on a negative AI outcome, there are common and novel legal issues that must be addressed. These include the following.

Who to Sue?

One of the perplexing questions as we continue down the path of AI is who is responsible for the harm or bad outcome from the use of AI? In other words, who is at fault?21

For example, if a self-driving car is involved in an accident involving a pedestrian, then who should be sued? Should it be the manufacturer of the automobile, the programmer who developed the algorithms that essentially drove the car, the owner of the vehicle, the entity involved in testing the algorithms, or the programmer who coded the algorithms that automatically changed depending on the input of new data?

In another example, an employment discrimination matter, who should be responsible for harmful algorithms: the job-listing website? The third party who developed the algorithms for screening applicants? The provider of the training data that “trained” the algorithm?

These types of questions will become more prevalent as AI litigation increases.

What Is the Basis of Liability?

Generally, the basis of liability will rely upon traditional liability concepts such as22


  • Negligence,
  • Products liability,
  • Breach of warranty,
  • Contract breach, and
  • Others.

How Do You Prove or Disprove AI Liability?

As previously noted, the critical legal issues involved in all AI applications are whether the verifiable outcome is based on accurate and sufficient data and whether the algorithms used to execute and produce the outcome are programmed correctly. Therefore, the training data, input data, algorithms, and outcomes must be analyzed to determine whether or not the AI has produced a harmful outcome.

For example, applicants for a specific employment position can be discriminated against based on race, gender, religion, or other grounds without being aware of the discrimination. Oftentimes, this discrimination occurs based on data that are unknown to the applicant, and may very well be false.

An attorney and judge will need to understand the data that form the basis for the AI decision-making outcome, as well as algorithms that are designed to qualify or disqualify individuals from an employment position.

Judges and attorneys in these cases will be required to address several important issues, such as which algorithms were chosen and whether they were tested by an independent third party; what data were selected for training; how the algorithm was validated and tested; and how old the algorithm is and whether it has been recently retested.23
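One form such independent testing might take (the screening rule and test records here are invented stand-ins, not any litigated system) is re-running the challenged algorithm against held-out records with known correct outcomes and measuring its error rate:

```python
def screen(years_experience, certification):
    """The challenged algorithm under test (a hypothetical stand-in rule)."""
    return "qualified" if years_experience >= 3 or certification else "rejected"

# Held-out records a reviewer believes were decided correctly.
test_set = [
    ({"years_experience": 5, "certification": False}, "qualified"),
    ({"years_experience": 1, "certification": True},  "qualified"),
    ({"years_experience": 1, "certification": False}, "rejected"),
    ({"years_experience": 4, "certification": True},  "qualified"),
]

# Count how often the algorithm disagrees with the known-correct outcome.
errors = sum(1 for inputs, expected in test_set
             if screen(**inputs) != expected)
print(f"error rate: {errors}/{len(test_set)}")
```

Discovery requests targeting the test set, the validation procedure, and any third-party audit reports would aim to establish whether anything like this was ever done.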

This is a sampling of the questions that should be the focus of discovery through initial disclosures, interrogatories, and depositions.

Preservation and Legal Hold

Today, it is a litigant’s duty to preserve “electronically stored information” (ESI), documents, or physical evidence pertaining to a regulatory compliance notice, to “reasonably anticipated” litigation, or to a statutory requirement. This is often referred to as a “legal hold.”24

As we know, over the last 30 years the traditional forms of evidence—analog paper, audio, and video—have given way to digital emails, texts, audio, videos, social media, databases, and so forth. In AI matters, the underlying data (which may be voluminous) that are input into the algorithm may be ephemeral. If ephemeral information is relevant and responsive to anticipated litigation, a party is under an obligation to preserve this ESI.25 It is almost certain that the preservation of data input into an AI application will present challenging problems for judges and practitioners.
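One way a party might preserve otherwise-ephemeral AI inputs once a legal hold attaches (a sketch only; the field names and file format are assumptions, not requirements of any rule or product) is to snapshot each input, the model version, and the outcome to an append-only log at decision time:

```python
import datetime
import json

def score_and_preserve(inputs, model_version, score_fn, log_path):
    """Run the model, but first write the inputs and model version to a log."""
    outcome = score_fn(inputs)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # the data that would otherwise vanish
        "outcome": outcome,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")   # one JSON record per decision
    return outcome

result = score_and_preserve(
    {"applicant_id": 17, "years_experience": 4},
    model_version="2020-01-15",
    score_fn=lambda d: "qualified" if d["years_experience"] >= 3 else "rejected",
    log_path="ai_legal_hold.jsonl",
)
print(result)
```

Each decision appends one JSON line to the log file, so the inputs and the version of the algorithm that actually produced a challenged outcome survive for later discovery even after the live system has moved on.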

Scope of Discovery and Protective Orders

As we are aware, in federal cases (and under analogous state rules), Federal Rule of Civil Procedure 26(b)(1) provides that a party “may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case. . . .”

Generally, the AI “black box” algorithm and data are relevant, but may be objected to by the disclosing party as trade secrets.

However, the scope of discovery, trade secret data, and algorithms can be protected by a protective order from the court based on Federal Rule of Civil Procedure 26(c), which specifically provides that “[t]he court may, for good cause, issue an order to protect a party or person from annoyance, embarrassment, oppression, or undue burden or expense, . . . including . . . that a trade secret . . . not be revealed or be revealed only in a designated way.” Problems relating to trade secrets are often addressed through an appropriate confidentiality agreement and/or protective order.

As courts and litigants are aware, there are other limitations and protections regarding eDiscovery, including objections based on relevance, overbreadth, inaccessibility (not reasonably accessible), undue burden, and proportionality, that may be applicable in an AI case.26

Conclusion

In the last 20 years of writing and speaking on electronic discovery and evidence, I have sounded the alarm on the transformation from analog to digital evidence. Though I have strongly advocated for legal professionals to understand, discover, and use digital evidence in their practice, the not-so-recent incorporation of artificial intelligence into all facets of our lives has caused me to raise the alarm yet again.

It is critical to the American justice system that attorneys and other legal professionals are competent and prepared to litigate matters where discrimination or other social injustices based on inaccurate data or prejudicial or biased algorithms have occurred. Without this preparation from the legal profession, we leave our society and ourselves at tremendous risk.

Endnotes

1. Lauri Donahue, A Primer on Using Artificial Intelligence in the Legal Profession, JOLT Dig. (Jan. 3, 2018), https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession.

2. Samir Passi & Solon Barocas, Problem Formulation and Fairness, Cornell Univ. (Jan. 8, 2019), https://arxiv.org/abs/1901.02547.

3. Cade Metz, Building A.I. That Can Build A.I., N.Y. Times (Nov. 5, 2017), https://tinyurl.com/yafxsth4.

4. Andy Thurai, It Is Our Responsibility to Make Sure Our AI Is Ethical and Moral, aitrends (Jan. 15, 2019), https://tinyurl.com/twwen4g.

5. Big Data, Wikipedia, https://en.wikipedia.org/wiki/Big_data (last visited Oct. 17, 2019) (quoting Data, Data Everywhere, The Economist (Feb. 25, 2010)).

6. Robins v. Spokeo, Inc., 867 F.3d 1108 (9th Cir. 2017), cert. denied, Spokeo, Inc. v. Robins, 2018 U.S. LEXIS 850 (Jan. 22, 2018) (circuit court found plaintiff had standing to challenge an inaccurate consumer data report).

7. Mike Thomas, The Future of Artificial Intelligence, Built in (June 8, 2019; updated Aug. 1, 2019), https://builtin.com/artificial-intelligence/artificial-intelligence-future.

8. For a listing of many of these projects, see List of Artificial Intelligence Projects, Wikipedia, https://en.wikipedia.org/wiki/List_of_artificial_intelligence_projects (last visited Oct. 5, 2019).

9. Steve Lohr, A.I. Is Doing Legal Work. But It Won’t Replace Lawyers, Yet., N.Y. Times (Mar. 19, 2017), https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html.

10. Artificial Intelligence, Wikipedia, https://en.wikipedia.org/wiki/Artificial_intelligence (last visited Oct. 10, 2019); Louis Columbus, McKinsey’s State of Machine Learning and AI, 2017, Forbes (July 9, 2017), https://tinyurl.com/yyusmku2; Thomas, supra note 7.

11. See OpenAI, https://openai.com/about (last visited Sept. 17, 2019).

12. Jamie Baker, Beyond the Information Age: The Duty of Technology Competence in the Algorithmic Society, 69 S.C. L. Rev. 557 (2018), available at SSRN: https://ssrn.com/abstract=3097250; Robert Ambrogi, 31 States Have Adopted Ethical Duty of Technology Competence, LawSites (Mar. 16, 2015), https://tinyurl.com.

13. State Bar of Cal. Formal Op. No. 2015-193, available at Ethics & Technology Resources, State Bar of Cal., https://tinyurl.com/y8cal5gr (last visited July 1, 2019).

14. Michael R. Arkfeld, Arkfeld on Electronic Discovery and Evidence, § 7.9(A)(1), Preservation Obligation (4th ed. 2019) [hereinafter Arkfeld on eDiscovery and Evidence].

15. Dan H. Willoughby, Rose Hunter Jones & Gregory R. Antine, Sanctions for E-Discovery Violations: By the Numbers, 60 Duke L.J. 789 (2010), https://tinyurl.com/yytls4e4; Hon. Shira A. Scheindlin & Kanchana Wangkeo, Electronic Discovery Sanctions in the Twenty-First Century, 11 Mich. Telecomm. Tech. L. Rev. 71 (2004).

16. 251 F. Supp. 3d 1168, 1179 (S.D. Tex. 2017). See also AI Now Inst. et al., Report, Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems (Sept. 2018), https://ainowinstitute.org/announcements/litigating-algorithms.html.

17. State v. Loomis, 881 N.W.2d 749, 752 (Wis. 2016), cert. denied, Loomis v. Wisconsin, 2017 U.S. LEXIS 4204 (June 26, 2017).

18. Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias—There’s Software [COMPAS] Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

19. K.W. v. Armstrong, 180 F. Supp. 3d 703 (D. Idaho 2016).

20. Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

21. Iria Giuffrida, Fredric Lederer & Nicolas Vermerys, A Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law, 68 Case W. Res. L. Rev. 747 (2018), https://scholarlycommons.law.case.edu/caselrev/vol68/iss3/14.

22. Id.

23. Brian Higgins, Civil Litigation Discovery Approaches in the Era of Advanced Artificial Intelligence Technologies, Artificial Intelligence Tech. & L. (May 18, 2019), http://aitechnologylaw.com/2019/05/discovery-approaches-ai-technologies.

24. Arkfeld on eDiscovery and Evidence, supra note 14, § 7.9, Legal Hold and Sanctions.

25. Arista Records LLC v. Usenet.com, Inc., 608 F. Supp. 2d 409, 432 (S.D.N.Y. 2009) (court ordered party to preserve ephemeral data); Columbia Pictures Indus. v. Bunnell, No. 06-1093, 2007 U.S. Dist. LEXIS 46364 (C.D. Cal. May 29, 2007), aff’d, 245 F.R.D. 443, 446 (C.D. Cal. 2007) (court ordered the preservation of ephemeral IP addresses); Convolve, Inc. v. Compaq Computer Corp., 223 F.R.D. 162 (S.D.N.Y. 2004) (court denied sanctions based on failure to preserve temporary “wave forms”).

26. Arkfeld on eDiscovery and Evidence, supra note 14, § 7.4, Production and Protection of Case Information.


Michael Arkfeld is a litigator, speaker, and author on electronic discovery, digital evidence, and artificial intelligence. He can be reached at michael@arkfeld.com.