When Artificial Intelligence Yields Artificial Evidence, We Get “Deepfakes”
A deepfake is an image or recording convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said. The problem for lawyers is that modern AI generators can manipulate audio and video (some even using facial recognition) to create nearly undetectable deepfakes.
An audio recording of a father making “violent” threats against his wife was submitted in a UK child custody battle. Patrick Ryan of The National reported that the father’s attorneys “were lucky to get the original audio file and be able to study the metadata on the recording” (Patrick Ryan, “Deepfake” Audio Evidence Used in UK Court to Discredit Dubai Dad, The Nat’l (Feb. 8, 2020)). Their digital forensics experts discovered the recording was a deepfake. The mother had used software and online tutorials to edit the original phone call and put together a “plausible audio file.” The father’s attorney stated, “It would never occur to most judges that deepfake material could be submitted as evidence” (id.).
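For context, a first pass at “studying the metadata” can be done with ordinary tools. The sketch below is not the forensic workflow used in the UK case; it assumes FFmpeg’s ffprobe utility is installed, uses a hypothetical file name, and merely dumps the container and stream metadata that an expert would then scrutinize for signs of re-encoding or editing.

    # Rough first-pass metadata dump, assuming ffprobe (part of FFmpeg)
    # is installed. "recording.m4a" is a hypothetical file name.
    import json
    import subprocess

    def audio_metadata(path: str) -> dict:
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True)
        return json.loads(result.stdout)

    meta = audio_metadata("recording.m4a")
    # Creation time, encoder tags, and codec details can hint that a file
    # was re-encoded or edited after the original call was recorded.
    print(meta["format"].get("tags", {}))
    print([s.get("codec_name") for s in meta["streams"]])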
A mother in Pennsylvania allegedly used explicit deepfake photos and videos purporting to show her teenage daughter’s cheerleading rivals “naked, drinking and smoking a vape pen” to try to get them kicked off her daughter’s team. Ultimately, the mother was sentenced to three years’ probation (Katie Katro, Bucks County Mother Gets Probation in Harassment Case Involving Daughter’s Cheerleading Rivals, 6abc.com (June 9, 2022)).
In California, a MySpace image was erroneously admitted as evidence. The Court of Appeal ruled that prosecutors should not have used the digital image because “no expert testified that the picture was not a ‘composite’ or ‘faked’ photograph” (People v. Beckley, 185 Cal. App. 4th 509 (2010)).
The Liar’s Dividend
The “Liar’s Dividend” is what law professors Bobby Chesney and Danielle Citron call the pernicious way deepfakes muddle truth: “Ironically, liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes” (Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753 (2019)).
The dividend flows “perversely” in proportion to success in educating the public about the dangers of deep fakes:
Imagine a situation in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes. Put simply, a skeptical public will be primed to doubt the authenticity of real audio and video evidence (id.).
Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Center for Internet and Society at Stanford Law School, also warns that deepfakes could erode trust in the justice system: “My worry is that juries may be primed to come into the courtroom and be less ready to believe what they see and believe what they hear and will be more susceptible to claims that something is fake or not” (Matt Reynolds, Courts and Lawyers Struggle with Growing Prevalence of Deepfakes, A.B.A. J. (June 9, 2020)). She notes, however, that courts are aware that “there’s always the possibility that somebody is trying to hoodwink them” (id.).
You Shall Know the Truth, and the Truth Will Make You Mad. —Aldous Huxley
Indeed, courts are already hearing deepfake defenses. The Brennan Center for Justice describes a couple of instances: Guy Reffitt, allegedly an anti-government militia member, was charged with bringing a handgun to the January 6, 2021, Capitol riots and assaulting law enforcement officers; Reffitt’s lawyer maintained that the prosecution’s evidence was deepfaked (Josh A. Goldstein & Andrew Lohn, Deepfakes, Elections, and Shrinking the Liar’s Dividend, Brennan Ctr. for Just. (Jan. 23, 2024)).
Tesla lawyers have similarly argued that Elon Musk’s past remarks on the safety of self-driving cars should not be used in court because they, too, could be deepfakes (id.).
Problems for lawyers will worsen as more AI tools are developed and refined and truth continues to blur with fiction. For example, one online AI tool called Watermark Remover allows users to remove watermarks from images and videos. Take note, intellectual property attorneys. (Pun intended.)
Another tool, Descript, lets users edit audio and video by editing the transcript: changing text, cloning voices, removing video backgrounds, and stripping out filler words like “uh” and “um,” along with background noise.
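To make the filler-word feature concrete, here is a minimal text-side sketch in Python. It is not Descript’s actual pipeline (which edits the underlying audio along with the transcript); the word list and sample sentence are illustrative assumptions.

    # Minimal, illustrative transcript cleanup: strip filler words.
    # Not Descript's implementation; the word list is an assumption.
    import re

    FILLERS = ("uh", "um", "er", "ah")

    def remove_fillers(transcript: str) -> str:
        pattern = r"\b(" + "|".join(FILLERS) + r")\b,?\s*"
        return re.sub(pattern, "", transcript, flags=re.IGNORECASE).strip()

    print(remove_fillers("Well, um, I never, uh, said that."))
    # -> "Well, I never, said that."

The point for lawyers is how little effort such edits require: a few lines of text manipulation, and a “transcript” no longer reflects exactly what was said.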
If people come to believe any video could be faked, evidence that is currently used in court, from cell phone and dashcam videos to footage from the thousands of doorbell cameras and security systems, won’t be of much use anymore.
Seeing-Is-Believing Is a Blind Spot in Man’s Vision. —R. Buckminster Fuller
So, what do you do when you receive a “smoking gun” video? A reliable deepfake detection system hasn’t yet been created, and general-purpose AI content detection tools don’t work well.
Last year, an international team of academics found that a dozen AI-detection tools were “neither accurate nor reliable”; another team, from the University of Maryland, found that the tools either flagged work not produced by AI or could be circumvented entirely simply by paraphrasing AI-generated text (Lauren Coffey, Professors Cautious of Tools to Detect AI-Generated Writing, Inside Higher Ed (Feb. 9, 2024)).
OpenAI has admitted that its own AI detector, AI Classifier, failed to reliably distinguish between human-written and AI-written text; it correctly identified only 26 percent of AI-written text as “likely AI-written” while incorrectly labeling human-written text as AI-written 9 percent of the time (Jason Nelson, OpenAI Quietly Shuts Down Its AI Detection Tool, Decrypt (July 24, 2023)).
A Photograph Is Usually Looked At—Seldom Looked Into. —Ansel Adams
On the other hand, University at Buffalo computer scientists have developed a tool that identifies deepfake photos by analyzing light reflections in the eyes. It was 94 percent effective with “portrait-like photos” (Melvin Bankhead III, How to Spot Deepfakes? Look at Light Reflection in the Eyes, News Ctr., Univ. at Buffalo (Mar. 10, 2021)). This tool spots tiny deviations in reflected light in the eyes of deepfake images:
When we look at something, the image of what we see is reflected in our eyes. In a real photo or video, the reflections on the eyes would generally appear to be the same shape and color. However, most images generated by artificial intelligence . . . fail to accurately or consistently do this, possibly due to many photos combined to generate the fake image (id.).
“The cornea is almost like a perfect semisphere and is very reflective,” said the study’s lead author, Siwei Lyu. “The two eyes should have very similar reflective patterns because they’re seeing the same thing. It’s something that we . . . don’t typically notice when we look at a face” (id.).
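As a rough illustration of that idea (and emphatically not the University at Buffalo tool itself), the sketch below uses OpenCV to find both eyes in a portrait and compare the brightness patterns of the two eye regions, where the corneal highlight appears; the histogram size, detection parameters, and file name are all assumptions.

    # Illustrative only: compare the brightness patterns of the two eyes
    # in a portrait photo. Not the University at Buffalo detector.
    import cv2

    def eye_reflection_similarity(image_path: str):
        img = cv2.imread(image_path)
        if img is None:
            return None
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) < 2:
            return None  # need both eyes to compare
        # Keep the two largest detections as the two eyes.
        eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
        hists = []
        for (x, y, w, h) in eyes:
            patch = gray[y:y + h, x:x + w]
            # 32-bin brightness histogram of the eye region; the corneal
            # highlight shows up in the brightest bins.
            hist = cv2.calcHist([patch], [0], None, [32], [0, 256])
            cv2.normalize(hist, hist)
            hists.append(hist)
        # Correlation near 1.0 means the two eyes' brightness patterns
        # agree; a low score suggests inconsistent reflections, one red
        # flag for a synthetic face.
        return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

    # Hypothetical usage: print(eye_reflection_similarity("exhibit_photo.jpg"))

The published method analyzes the shape, color, and geometry of the reflections in far more detail, but even this toy version shows how a seeing-is-believing cue can be reduced to a number a forensic expert can testify about.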
The authenticity of evidence could be an issue in any trial, whether or not the evidence is digital. Until better deepfake detection tools become available, remember that the eyes always give it away.