
GPSolo Magazine

GPSolo May/June 2024: The Changing Face of Evidence

Admissibility of Electronically Stored Information and Other New Technology

Kevin James Doran

Summary

  • What makes a new method or technology reliable enough for a judge to allow its presentation to a jury? How does one prove this reliability?
  • An attorney bringing a new kind of technology as evidence needs to think about how to categorize this evidence because this determines which of the rules of evidence apply.
  • So long as electronically stored information (ESI) complies with the rules of relevance, privilege, and discoverability, it can be authenticated by traditional means.
  • Who conveys the information in a new piece of technology can determine which rules the court will apply.

Evidence can be anything. Any item, testimony, fact, idea, or information can be offered as evidence so long as it makes the existence of a fact more or less probable to a jury. The Federal Rules of Evidence and traditional common law determine whether evidence is admissible; they deal with traditional situations such as eyewitness testimony or how to consider established facts or written records. However, technological innovations now arrive every day, providing entirely new ways to record, store, or otherwise extract evidence that proves or disproves how an occurrence happened. How does one prove the reliability of social media posts or recordings from a smart home device? In the very near future, evidence will be conceived and generated by machines, with minimal to no human involvement. Where do you begin to convince a judge to allow evidence generated by artificial intelligence?

For the most part, the answer is reliability. Most limitations imposed on what can be admitted go back to an examination of why any given evidence is reliable. What makes a new method or technology reliable enough for a judge to allow its presentation to a jury? How does one prove this reliability?

Admission of Evidence under the Federal Rules of Evidence

Before considering reliability, attorneys must consider certain preliminary factors for admissibility for any piece of evidence. Broadly speaking, evidence can be divided into four categories: (1) real evidence, (2) testimonial evidence, (3) documentary evidence, and (4) demonstrative or illustrative evidence. Real evidence covers tangible objects related to the occurrence that can be physically handled and inspected. Testimonial evidence comes from a sworn witness testifying under oath. Documentary evidence covers anything “written down,” including papers, records, reports, books, emails, or written statements and communications; documentary evidence must be authenticated by some means, depending on where it came from. Finally, demonstrative or illustrative evidence supplements other forms of evidence to illustrate complex topics and aid the witnesses (usually experts) in explaining key facts to the jury. Common demonstratives include graphs, charts, diagrams, animations, and timelines.

An attorney bringing a new kind of technology as evidence into a criminal or civil trial needs to think about which category this evidence falls under because the kind of evidence determines which of the rules of evidence apply to its admission or exclusion.

Cornerstones of Admissibility and Discoverability

Any analysis of introducing evidence in U.S. courts begins with the rules of evidence. We will focus on the Federal Rules of Evidence, which codified the common law of evidence in 1975 and have been periodically amended to provide more guidance on new technology. Amendments effective in 2017 addressed digital technology as evidence; the rules now contain several provisions dealing with electronically stored information (ESI).

At the most basic level, all evidence must be relevant (Fed. R. Evid. 401). Additionally, judges can exclude relevant evidence if its probative value is substantially outweighed by a danger of unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence (Fed. R. Evid. 403).

While all evidence creates some level of prejudice against the other side, opposing counsel could always argue that a new piece of technology is so cutting edge that getting the jury up to speed would take too much time (undue delay, wasting time). Would you need to bring in an entire new class of experts to testify to its reliability? The court could see it as a needless complication, especially if there are other, more traditional (perhaps more comfortable) sources of the same information. Additionally, only non-privileged evidence (Fed. R. Evid. 501, 502) is admissible, so any electronic communication covered by privilege, such as texts between spouses or a recording of a videoconference between a client and an attorney, would not be admissible, regardless of its relevance.

Beyond determinations of relevance and privilege, an attorney needs to consider the actual discovery process. Under the Federal Rules of Civil Procedure governing discovery of ESI, a party does not need to produce data that is not proportional to the needs of the case (Fed. R. Civ. P. 26(b)(1)) or that is not reasonably accessible because of undue burden or cost (Fed. R. Civ. P. 26(b)(2)(B)). This may seem obvious, but it provides an important limit on the data sought in discovery. Given the amount of data produced on or around a person in everyday life, if a request for discovery is not tailored to the case, it will be denied as overly broad.

The health data generated by wearable technology such as Fitbit trackers or Apple Watches provides an instructive example. These devices continuously track the wearer’s cardiovascular health in real time, along with the wearer’s GPS location and general activity level. In a recent case (Bartis v. Biomet, Inc., No. 4:13-CV-00657-JAR, 2021 WL 2092785, at *1 (E.D. Mo. May 24, 2021)), an attorney defending a product liability claim against an artificial hip manufacturer requested a plaintiff’s Fitbit data to rebut the plaintiff’s claims of pain and reduced mobility. The plaintiff objected, but the court decided the plaintiff’s activity levels were relevant and that the burden of producing the data was extremely low. However, in another case (Spoljaric v. Savarese, 66 Misc. 3d 1220 (N.Y. Sup. Ct. 2020)), a different court denied a defendant’s request for weight loss data, finding it to be a speculative “fishing expedition.” If an attorney plans to request data from a wearable device, the request must be narrowly tailored to the relevant time period and the relevant data; courts seem unwilling to make people hand over all their health data.

Authentication of Your New Technology

So long as ESI complies with the aforementioned rules of relevance, privilege, and discoverability, it can be authenticated by traditional means. Authentication under the Federal Rules of Evidence means the proponent of a piece of evidence must produce evidence supporting a finding that the item is what the proponent claims it to be (Fed. R. Evid. 901). The burden of proof for authentication is not high: One need only show a reasonable likelihood that the evidence is authentic. Text messages, whether sent by SMS or through a proprietary messaging app, can be authenticated by the sworn testimony of someone in that conversation. That being said, if a member of the chat is not testifying in court, any fact that person asserts can be barred as hearsay unless an exception applies. The same analysis applies to a recording of a videoconference meeting via Zoom, Microsoft Teams, or Cisco Webex: So long as the content of the meeting is not barred by another rule and someone can testify that the meeting happened the way the proponent claims, the recording should meet the authentication requirement.

One could also authenticate a recording of oneself made by a smart home device such as Amazon Echo or Google Nest. These digital assistants respond to verbal commands, recording whoever activates them with certain “wake words” (e.g., “Alexa” or “Hey, Google”). And if the assistant connects to other smart devices, such as automated thermostats or home security systems, these home recordings could be crucial to proving or disproving that someone was in a particular location at a particular time.

Evidence generated by these systems can raise several complicating factors regarding discoverability and access. In one high-profile criminal action (State v. Verrill, 293 A.3d 178 (N.H. 2022)), prosecutors issued a search warrant for a murder suspect’s Amazon Echo device, seeking evidence to disprove the suspect’s version of events. These devices are always listening for their wake words and can passively record inside our homes, where we have a reasonable expectation of privacy, so the production of this data could raise problems under the Fourth Amendment’s protection against unreasonable search and seizure. On the other hand, one could foresee a situation where a suspect has no alibi other than the data from such smart home devices and would ask Amazon to produce it as part of his or her defense.

ESI must still be authenticated. Federal Rule of Evidence 902(14) states that some data is self-authenticating when a qualified person certifies, by use of a hash value, that the data proffered is identical to the original. Hash values are very important to the discovery and production of ESI, and all practitioners dealing with ESI should have some familiarity with them. A hash value is a string of characters generated by running the underlying data through an algorithm; because even a tiny change to the data produces a completely different hash value, matching hashes act as a kind of unique digital signature, as sketched below. Using hash values helps avoid the time and expense of calling a records custodian to authenticate stored data.
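As a concrete illustration, the following minimal sketch shows how a hash comparison might work in practice. It is written in Python using the standard hashlib library; the file names are hypothetical, and SHA-256 is just one commonly used algorithm.

    import hashlib

    def file_hash(path: str, algorithm: str = "sha256") -> str:
        """Compute a hash value (digital fingerprint) for a file."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            # Read in chunks so large evidence files do not exhaust memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file names: the produced copy is certified as identical
    # to the original when the two hash values match exactly.
    if file_hash("original_export.pst") == file_hash("produced_copy.pst"):
        print("Hash values match: the copy is identical to the original.")

If even a single byte of the produced copy differed from the original, the two hash values would not match, which is what makes the comparison a reliable certification of identity.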

Is It Testimony? (Who or What Is Conveying the Information?)

Considering who conveys the information in a new piece of technology can determine which rules the court will apply. Some machine data acts as a conduit for the assertions of others, such as posts on social media or a business’s ESI generated by employees. However, conveyance becomes a much more complicated issue when a machine process is almost entirely responsible for evidence. When first demonstrated, photography essentially allowed for the capture and preservation of a moment in time. But who conveys the information the photograph shows? All the photographer does is point the camera in a direction and decide when to capture. The content of a photograph depends on the inner workings and settings of the camera. If a photograph provides an assertion that is offered to prove the matter asserted, then who is the declarant?

In the earliest days of photography, courts would only admit photos if the photographer testified about the entire process and the accuracy of the photograph, but this became increasingly difficult the more ubiquitous photographs became. Courts had similar issues with X-ray images (no one could testify that X-rays showed what the person’s bones actually looked like) and later with motion pictures and video. Yes, a person can testify as to the internal process and that they aimed the camera and pushed the button, but we still have the problem of conveyance.

Courts have solved this issue by tying these artificially generated still and moving images to accompanying testimony as demonstrative evidence. Someone, often not even the person responsible for “taking” the image, must testify that it’s an accurate representation of what it purports to be, even in the case of automated surveillance systems where no one pushed the record button. As demonstrative evidence, the image or video does not convey information but supplements the information conveyed by a person.

DNA evidence is a lot like a photograph: In both cases, a person activates a mechanical process, and the machine produces the result. DNA lab technicians rely on highly advanced software and algorithms to analyze the samples collected and make the final determinations on matches. The lab technician does not and cannot visually analyze chromosomes; the technician simply determines whether a sample is adequate for the machine’s analysis. This is not to minimize the role of DNA lab technicians, only to highlight that the technician provides no expertise or judgment in the ultimate conclusion. It is the DNA software, not a human, that conveys all the information.

The lack of certainty on conveyance may seem trivial until one considers the Confrontation Clause of the Sixth Amendment. The Supreme Court has found DNA evidence inadmissible when no lab technician testifies and is subject to cross-examination, even though the technician is not the source of the DNA evidence’s reliability. On its own, the output of a piece of software cannot be cross-examined. This problem extends to any conclusion produced by software or a specialized computer-driven analysis of data. How can one validate the reliability of a machine process one does not know?

Should the programmers or developers of the software be called to testify to the reliability of the process? If so, we reach the opposite problem: persons who know the process intimately but know nothing about the facts of the case. It is the same as calling a photographer to testify to the reliability of photography in the 1800s. These same machine conveyance issues can arise with evidence produced by blood-alcohol content readers, fMRI and other medical imaging devices, mass spectrometers, and any other machine or instrument with a specialized, complex internal process. This is not the same as an expert witness using a specialized tool to assist in making observations and judgments; in that case, the data came from a machine, but a human who can be cross-examined is responsible for the “output.”

An Uncharted Territory for Evidence Law

In the future, the issue of machine conveyance will only become more common as more complicated machine processes are invented and generate more data without human involvement. Without cross-examination, how can anyone adequately test for reliability and accuracy? While the existing rules of evidence cover a wide range of facts and circumstances, the advancement of artificial intelligence presents uncharted territory for evidence law. Still, the current rules provide a reliable road map to the ultimate goals of evidence: It must be reliable, it must be relevant, and, ultimately, we must be able to explain why both of those things are true. The law is always catching up to technology, but the foundation of the current rules gives us a head start.
