November 07, 2022 Feature

Deepfakes and Their (Un)intended Consequences

Sandra Ristovska

The global explosion of video—thanks to smartphones, social media platforms, and messaging apps—has helped propel questions about policing, domestic abuse, and human rights to the forefront of public debate. Video’s informative value is familiar by now, but there is another side worthy of attention, as a recent news event illustrates. In March 2022, a video showing Ukrainian President Volodymyr Zelensky asking his troops to surrender to Russia appeared on Facebook, YouTube, Telegram, and the Russian social network VKontakte. It turned out to be a deepfake, an umbrella term for audio and visual content manipulated or fabricated through machine learning and AI techniques. The artificial video of Zelensky appeared within days of a public warning from the Ukrainian government’s Center for Strategic Communication that such deepfakes might circulate online. Zelensky responded with his own video debunking what he called childish propaganda, while some social media companies acted swiftly to remove the deepfake where it violated terms-of-service agreements.1

This highly publicized case exemplifies the rapid diffusion of deepfake technologies, which various actors now use to create synthetic videos for a wide array of creative and harmful purposes, including entertainment, satire, sexual exploitation, and political manipulation. Unlike the Ukraine example, though, most deepfakes are not met with prompt and successful debunking efforts. Sometimes a suspicious video can be neither verified as authentic nor discredited as fake. To give just one example, in 2021 the military government in Myanmar used what appeared to be a confession video of a prominent imprisoned politician to accuse former leader Aung San Suu Kyi of corruption. Though many in the country believed that the video was a deepfake, media forensic experts and detection algorithms have been unable to establish the authenticity of the low-quality footage with a reasonable degree of certainty. Human rights experts familiar with Myanmar, on the other hand, raised the possibility that the video was a real recording of a staged and coerced confession.2

Because the technologies used to create artificial videos are becoming more sophisticated, more user-friendly, and easier to access, deepfakes have introduced a new level of anxiety to prevailing policy concerns about how and under which circumstances it is necessary to address viral deception and manipulation. In the U.S., for example, Senator Rob Portman’s proposed Deepfake Task Force Act would establish a task force charged with developing “a plan to reduce the proliferation of digital content forgeries.”3 As part of the broader challenges to the credibility of online information, deepfakes are commonly perceived as the latest storm threatening the information landscape, impairing the ability to distinguish meaningfully between truth and falsehood. The weaponization of visual technologies, though, is both long-standing and wide-ranging. In 1950, for example, Senator Millard Tydings challenged Senator Joseph McCarthy’s allegations that numerous communists were working in the U.S. State Department. In response, McCarthy’s staff created a photograph, a composite of two distinct images, showing Tydings seemingly chatting with Earl Browder, then head of the American Communist Party. By some accounts, the manipulated photograph cost Tydings his reelection.4

Underpinning any discussion about altered or invented images is an enduring concern with the power of visual persuasion. People trust their vision more than their other senses.5 Vision also has a prioritized role in the anatomy and function of the brain, so images are processed more quickly than words. Images trigger emotional responses, engaging the brain’s fear and memory centers, faster than words do. Images are therefore more cognitively and emotionally arousing than other tools of communication, and the ease with which the brain processes them can heighten the belief that their content is true.6 It is not surprising, then, that ever since the arrival of photography in the nineteenth century, initial public responses to each new visual technology have replayed fears about the status of visual evidence.7 At the same time, the rapid diffusion of deepfake technologies, coupled with the magnifying power of social media and messaging apps, sheds new light on these old concerns, which are now complicated by the speed, global reach, anonymity, and other distinct features of the digital environment in which images circulate.

Deepfakes are thus an evolution and exacerbation of existing problems. The question of remedies involves cultural, technological, and legal considerations, each briefly surveyed below.

Cultural Considerations

“Prepare, don’t panic” is the motto of the human rights organization WITNESS, which is working on proactive policy initiatives and best practices to address emerging and potential malicious uses of deepfakes.8 Key here is the understanding that public obsession with deepfakes risks feeding cultures of speculation in which anything and everything can be dismissed as fake. An instructive example comes from Gabon, a country in Central Africa. To address public rumors in late 2018 that the president, Ali Bongo, was ill or dead, the government announced that he had been recovering from a stroke. Soon after, it released a video of Bongo delivering his customary New Year’s address. Yet this video, widely believed to be a deepfake, not only amplified the rumors but was also used as justification for what turned out to be an unsuccessful military coup.

The Verge reports that the “fear of deepfakes seems to have outpaced the technology itself.”9 The 2018 case from Gabon illustrates this point well: the sheer possibility of deepfakes was enough to sow doubt, with potentially detrimental political consequences. Around the world, government officials have also labeled inconvenient videos deepfakes, whether as an excuse for digital naivety and poor security10 or as plausible deniability to avoid accountability for things that are in fact true. The latter is what legal scholars Bobby Chesney and Danielle Citron famously called the “liar’s dividend,” which grows as the public becomes more educated about the dangers of deepfakes and thus more skeptical of audio and visual media in general.11 In this context, broader calls for visual media literacy programs in schools, senior centers, community groups, and places of worship may point to an important intervention for a better public understanding of images writ large.12

The prevailing cultural assumption that seeing is believing has long led to binary renditions of images as authentic or forged, transparent or opaque. This logic of naïve realism—that is, the sense that images provide objective or unmediated access to reality—shapes how people see visual media. Yet seeing is a complex process that involves not only what the eyes physically register but also the experiences and ideas that the viewer brings to the image.13 Deepfakes only make it clearer that visual literacy training on the cognitive and cultural influences on how people perceive and interpret images may be long overdue. Far from being a bulletproof solution, visual literacy programs could model for the public how to ask questions about images that move beyond the logic of naïve realism.

Technological Considerations

Questions about how to mitigate the harmful effects of deepfakes also involve technological considerations. The development of efficient detection software may seem like an obvious answer, but current technical capacities limit the accuracy and applicability of deepfake video detection, which experts call a game of cat and mouse.14 As a result, parallel efforts toward shared standards and tools for content verification are also important. One such effort is the Coalition for Content Provenance and Authenticity (C2PA), an industry-wide collaboration led by Adobe, Arm, BBC, Intel, Microsoft, and Truepic. In January 2022, C2PA proposed the first global technical standards for better tracking the authenticity and provenance of digital media. The standards were developed to facilitate the design and adoption of tools that can identify the source and history of online content across digital platforms.15
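To make the provenance idea concrete, the minimal Python sketch below binds a cryptographic fingerprint of a media file to a simple record of its source and edit history, so a later check can reveal whether the file’s bytes have changed since the record was made. This is an illustration of the general concept only, not the C2PA specification or any of its software tools; the real standards additionally rely on digitally signed manifests attached to the media, and the file name, record fields, and functions used here are hypothetical.

```python
# Conceptual sketch of provenance tracking (NOT the C2PA standard or SDK).
# A fingerprint of the file's bytes is stored alongside source and edit
# information; a later check tells us whether the bytes still match.
import hashlib
import json
from pathlib import Path


def fingerprint(path: Path) -> str:
    """Return a SHA-256 hex digest of the file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def make_provenance_record(path: Path, source: str, edits: list[str]) -> dict:
    """Build a toy provenance record for a media file (hypothetical schema)."""
    return {
        "file": path.name,
        "sha256": fingerprint(path),
        "source": source,        # e.g., capturing device or publisher
        "edit_history": edits,   # e.g., ["cropped", "color-corrected"]
    }


def matches_record(path: Path, record: dict) -> bool:
    """Check whether the file's current bytes still match the stored record."""
    return fingerprint(path) == record["sha256"]


if __name__ == "__main__":
    video = Path("report.mp4")          # placeholder file for demonstration
    if not video.exists():
        video.write_bytes(b"placeholder video bytes")
    record = make_provenance_record(video, source="Newsroom camera 12", edits=[])
    print(json.dumps(record, indent=2))
    print("Unaltered since record was made:", matches_record(video, record))
```

Even this toy version hints at the limits discussed next: a matching fingerprint shows the file has not been altered since the record was created, but it says nothing about whether the recorded source is trustworthy or whether the footage is being shared in its proper context.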

Human rights organizations like WITNESS have joined C2PA while warning that technical solutions need to account for critical human rights principles and practices.16 These considerations involve a balancing act. For example, online anonymity may be an obstacle when seeking to hold the creator of a nonconsensual deepfake sex video (intended to inflict emotional harm) criminally liable. Yet privacy protections can be critical for those who risk their lives to record and upload footage with intact metadata from a war or conflict zone like Ukraine. Additionally, authenticity ticks and checkmarks on social media may signal that content is genuine rather than a deepfake, but they cannot account for decontextualized and misconstrued videos, which are far more common. An otherwise authentic video from one location can often be found circulating on social media with hashtags or captions suggesting a different location, thus misleading viewers. Equally important is the understanding that a video’s origin and history may not say much about its factual accuracy. Hence, technical standards and software, although necessary, offer just one set of tools for addressing some of the harms that deepfakes can cause to individuals and society.

Legal Considerations

An outright ban on deepfakes would certainly infringe upon free speech protections. In United States v. Alvarez (2012), the U.S. Supreme Court ruled that the First Amendment prohibits the government from regulating speech simply because it is a lie.17 It is worth underscoring, too, that digital manipulation is not inherently harmful. It can be a mode of creative expression when used, for example, for political satire. A more appropriate set of considerations therefore involves whether the creators of harmful deepfakes may be subject to civil or criminal liability. In the U.S., Bobby Chesney and Danielle Citron discuss intellectual property and tort law as potentially relevant for suing the creators of deepfakes, while noting that questions of attribution, the global nature of online platforms, and the costs associated with civil lawsuits could be obstacles for plaintiffs. Criminal liability may be sought in specific circumstances under existing statutes, such as those concerning fraud, cyberstalking, impersonation, and defamation. Regulatory agencies like the FTC, FCC, and FEC may also have a limited role in advancing public policy goals to minimize the effects of malicious deepfake creation and circulation.18

Forensic scientists, on the other hand, have raised concerns that deepfakes complicate the role of visual media as evidence in court. Agnes E. Venema and Zeno J. Geradts, for example, discuss the possibility of a deepfake defense (claiming that an authentic video is fake), which shifts the burden of proof; proving a negative beyond a reasonable doubt, however, may be legally impossible. A second scenario arises when a deepfake believed to be authentic is admitted as evidence. Venema and Geradts thus suggest educating judges and juries on how to assess deepfakes. Such efforts, though, may remain limited in scope unless they also address the underlying lack of unified guidance and applications for treating video as evidence.

In the U.S., for example, video appears in 80% of criminal cases,19 but U.S. courts, from state and federal trial courts all the way to the Supreme Court, lack clear rules and standards on how video can be used and presented as evidence. The pervasive underlying assumption is that video evidence need not be governed by unified standards because seeing is believing. In other words, the logic of naïve realism prevails in court just as it does in society. As a result, judges, lawyers, and jurors treat video in highly varied ways that can lead to inconsistent and unsafe renderings of justice. Without unified, science-based guidance to facilitate better evaluation of video as evidence in court, visual media, not just deepfakes, will only grow as a challenge to the pursuit of equal and fair justice.

Moving Forward

At a time when video accounts for an estimated 82% of all consumer Internet traffic worldwide,20 examining its unintended consequences is imperative. Deepfakes exemplify this necessity. On the one hand, they highlight enduring issues of visual manipulation and persuasion. On the other hand, the distinct nature of the digital environment in which deepfakes are produced, circulated, and used makes it possible for manipulated or fabricated videos to have far-reaching consequences for individual privacy, democratic elections, policy debates, national security, and legal proceedings, among other domains. A cautious approach that accounts for the cultural, technological, and legal dimensions of the challenges exacerbated by deepfake technologies offers one way of moving forward. At the heart of this approach is the recognition that a critical understanding of visual perception and interpretation across law and policy domains is essential for justice and human rights. Otherwise, the costs of naïve realism may be too high in a digital age where video and its related technologies of deception continue to proliferate around the world.

Endnotes

1. Tom Simonite, A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be, WIRED (Mar. 17, 2022), https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook.

2. Sam Gregory, The World Needs Deepfake Experts to Stem This Chaos, WIRED (June 24, 2021), https://www.wired.com/story/opinion-the-world-needs-deepfake-experts-to-stem-this-chaos.

3. S. 2559, Deepfake Task Force Act, https://www.congress.gov/bill/117th-congress/senate-bill/2559 (last visited June 10, 2022).

4. William J. Mitchell, The Reconfigured Eye: Visual Truth in the Post-Photographic Era (The MIT Press 1992).

5. Emily Balcetis, Clearer, Closer, Better: How Successful People See the World (Ballantine Books 2020).

6. Yael Granot, Emily Balcetis, Neal Feigenson & Tom Tyler, In the Eyes of the Law: Perception versus Reality in the Appraisals of Video Evidence, 24 Psych., Pub. Pol’y & Law 93 (2018).

7. Id.; Jennifer L. Mnookin, The Image of Truth: Photographic Evidence and the Power of Analogy, 10 Yale J. Law & Humanities 1 (1998).

8. Prepare, Don’t Panic: Synthetic Media and Deepfakes, lab.witness.org, https://lab.witness.org/projects/synthetic-media-and-deep-fakes/ (last visited June 10, 2022).

9. James Vincent, “Deepfake” That Supposedly Fooled European Politicians Was Just a Look-Alike, Say Pranksters, The Verge (Apr. 30, 2021), https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus.

10. Id.

11. Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753 (2019).

12. Mary Angela Bock, Visual Media Literacy and Ethics: Images as Affordances in the Digital Public Sphere, First Monday (forthcoming).

13. Sandra Ristovska, From Rodney King to George Floyd, How Video Evidence Can Be Differently Interpreted in Courts, The Conversation (May 10, 2021), https://theconversation.com/from-rodney-king-to-george-floyd-how-video-evidence-can-be-differently-interpreted-in-courts-159794.

14. Catherine Bernaciak & Dominic Ross, How Easy Is It to Make and Detect a Deepfake, Software Eng’g Inst. Blog, https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake (last visited June 10, 2022).

15. C2PA Releases Specifications of World’s First Industry Standard for Content Provenance, c2pa.org, https://c2pa.org/post/release_1_pr/ (last visited June 10, 2022).

16. Sam Gregory, To Battle Deepfakes, Our Technologies Must Track Their Transformations, The Hill (June 7, 2022), https://thehill.com/opinion/technology/3513054-to-battle-deepfakes-our-technologies-must-lead-us-to-the-truth.

17. 567 U.S. 709 (2012).

18. Chesney & Citron, supra note 11, at 1792–1808; see also Agnes E. Venema & Zeno J. Geradts, Digital Forensics, Deepfakes, and the Legal Process, 14 SciTech Law., no. 4, Summer 2020, at 14–17, 23.

19. Bureau of Just. Assistance, U.S. Dep’t of Just., Video Evidence: A Primer for Prosecutors (Oct. 2016), https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/final-video-evidence-primer-for-prosecutors.pdf.

20. CISCO Annual Internet Report (2018–2023) White Paper (Mar. 9, 2020), https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html.


Sandra Ristovska


Sandra Ristovska is an assistant professor of Media Studies at the College of Media, Communication, and Information at the University of Colorado Boulder. As part of her 2021 Mellon/ACLS Scholars & Society Fellowship, she was a resident researcher with the Scientific Evidence Committee of ABA’s Science and Technology Law Section.