
Litigation News

Fall 2024 Vol. 50, No. 1

Court Excludes AI-Enhanced Videos from Trial Evidence

William Howard Newman

Summary

  • AI technology that enlarges and sharpens video is not yet generally accepted.
  • The court refused to admit the video because the technology used to enhance it was not generally accepted in the relevant scientific community. 

A state court refused to admit generative artificial intelligence–enhanced video as trial evidence. In Washington v. Puloka, a witness in a criminal trial recorded a brief video on a phone. The defendant retained an expert who enhanced the video with an AI editing tool. The Superior Court of Washington for King County refused to admit the enhanced video as evidence because the technology used to enhance it was not generally accepted in the relevant scientific community. Leaders of the ABA Litigation Section agree with the decision but expect courts to accept similar technology in the future.

Defendant Sought to Introduce Enhanced Video

A witness in a murder case recorded a video on an iPhone and streamed it to the social media platform Snapchat. The video was about ten seconds long and, according to the defense, was of low resolution and blurry.

The defendant retained an expert to enhance the video and present a clearer picture at trial. The defense expert was candid that he had no forensic training and was not a forensic video technician. The court described him as a “self-identified videographer and filmmaker.”

The defense expert used two pieces of software to enhance the video: Topaz Labs AI and Adobe Premiere Pro. He explained that Topaz Labs AI added “sharpness, definition, and smoother edges to the video.” He said that the tool used “machine learning,” a process that adds details to a video based on an algorithm that applies lessons derived from the analysis of many other videos. He conceded that he did not know many details about the algorithm, such as which videos it had analyzed. He also noted that the software’s algorithm is “opaque and proprietary.”

The government opposed admission of the video. Unlike the defense expert, the prosecution presented a “Certified Forensic Video Analyst, with national and international forensic video analysis credentials.” He testified that the defense’s technology multiplied the number of pixels in the video by approximately sixteen in an effort to create “a smoother, more attractive product.” He also testified that the technology “created false image detail,” such as changing the shape and color of objects in the video.

In addition to discussing the defense’s technology, the state’s expert testified about other image enlargement techniques that forensic video analysts had long used. He named the “nearest neighbor,” “bi-cubic,” and “bi-linear” techniques as examples. He testified that these techniques create videos that other programs can reliably reproduce, but that Topaz AI’s results cannot be reproduced because of the opacity of its process.
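The reproducibility the state’s expert described can be illustrated with a minimal nearest-neighbor upscaler. This is an illustrative sketch only, not any party’s software: the technique is a fixed, documented rule (each output pixel copies the closest source pixel), so any program applying the same rule to the same frame produces identical results.

```python
def nearest_neighbor_upscale(frame, factor):
    """Enlarge a 2D grid of pixel values by an integer factor.

    Each output pixel copies the nearest source pixel. Because the rule is
    fully determined by the input, any independent implementation of it
    reproduces the exact same output -- the verifiability that, per the
    testimony, opaque machine-learning upscalers lack.
    """
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

# A 2x2 "frame" enlarged 4x per side: 4 pixels become 64, i.e. roughly the
# sixteen-fold pixel increase the state's expert described.
frame = [[0, 255],
         [128, 64]]
big = nearest_neighbor_upscale(frame, 4)
assert len(big) * len(big[0]) == 16 * len(frame) * len(frame[0])
assert big[0][0] == 0 and big[0][7] == 255 and big[7][0] == 128
```

Bi-linear and bi-cubic enlargement differ only in the interpolation formula applied; they remain deterministic functions of the source pixels in the same way.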

The government expert also cited a publication by a scientific working group that suggested that the “nearest neighbor” technique would be the most accurate method to enhance a small object in a video. That same publication warned that machine learning was less reliable because “it can be challenging to identify what process[es] were applied to the imagery and replicate those steps with accuracy.”

Court Applies Frye Test to Exclude Video

The court began its analysis by citing the standard set forth in Frye v. United States for the admission of evidence using novel scientific techniques. That standard requires the technique to have “achieved general acceptance in the relevant scientific community.” The court cited the Washington State Supreme Court case State v. Riker to note that the relevant test does not consider whether “the proposed testimony is correct” but instead only whether the relevant scientific community has accepted the proposed technique.

The court set about identifying the relevant scientific community, agreeing with the government witness who defined it as “the forensic video analysis community.” The court rejected the defense expert’s claim that the “video production community” was the relevant scientific community.

Next, the court noted that the forensic video analysis community had not performed a peer review of the Topaz Labs AI tool. The defense expert did not know whether other forensic video analysts used Topaz AI. He was also unable to say whether analysts had evaluated the reliability of AI video enhancement technology. The court also observed that the relevant community could not replicate the Topaz tool’s results. And it lamented that no other court decisions had examined or approved the use of “AI-enhanced videos” at trial and that the defense had not submitted any publications in support of their use. The court acknowledged that “members of the video production community” used machine learning algorithms to enhance videos but found that their use could not support the video’s admission at trial. This was because that community did “not have a formal organization and they do not publish their testing outcomes.” This made it impossible for the court to evaluate the community’s findings concerning the technology.

Accordingly, the court denied the introduction of the enhanced videos. It cited the possibility of confusion at trial, as the enhanced videos could “muddle” eyewitness testimony. It recalled the government’s expert’s testimony that the Topaz AI software “created false image detail” and caused objects in the video to lose their “original shape and color.” And it noted its concern that, if it admitted the enhanced video, it would invite a possible lengthy “trial within a trial” about how the software worked.

The court also noted that the enhanced videos were not necessary because the defense intended to call multiple eyewitnesses and to introduce the original version of the video. It also held that the original source video itself was the best evidence of what the video depicted.

Section Leaders Agree with the Decision, But Anticipate Future Changes

Litigation Section leaders believe the court properly excluded the generative AI video. “It didn’t strike me as very controversial,” opines Rebecca Sha, New Orleans, LA, Co-Chair of the Section’s Diversity, Equity & Inclusion Committee. “The technology was not accepted or reliable in the relevant community,” she notes.

“AI enhancement tools are still very novel, and the court found there were simply not enough peer reviewed studies or general acceptance in the relevant scientific community for the AI tool at issue in the case,” adds Eric Harlan, Towson, MD, Co-Chair of the Section’s Alternative Dispute Resolution Committee. Accordingly, exclusion was appropriate because “you could not reliably analyze the process by which the source video was enhanced,” he observes.

Sha agrees. “I found the state’s citation to the Scientific Working Group on Digital Evidence and that organization’s warning regarding the use of AI enhancement tools in the courtroom context to be an additional fatal blow to any attempt to use the AI-enhanced videos,” she states. “The general public is also quite skeptical of AI technology given the many anecdotal examples of its flaws, like perceived bias and hallucinations,” Sha explains.

An area of concern regarding generative AI among Section leaders is its potential to distort evidence. One “driver behind the court’s decision” was “the fact that these enhancements changed the underlying evidence in significant and unproven ways,” says Karen L. Hart, Dallas, TX, Co-Chair of the Section’s Construction Litigation Committee.

Leaders agree, however, that courts may accept similar technology in the future. “We may get to a different place with AI evidentiary enhancements as this technology evolves,” predicts Hart. Harlan concurs, noting “I would imagine that eventually there will be AI tools that can enhance video reliably enough to be admitted, and then the weight attached to that evidence will be subject to a battle of the experts.” In particular, leaders predict that improved analysis will lead to the admission of AI-enhanced video as evidence. “Acceptance will change given more time and more technological improvements that are tested and reviewed,” concludes Sha.
