Litigation News

Summer 2023, Vol. 48, No. 4

How Real Are Deepfakes?

John McNichols

Summary

  • Deepfakes are computer-generated video files that appear to show a subject person saying or doing particular things that he or she did not do.
  • Deepfakes differ from earlier generations of manipulated photo and video files in that they are enabled by a form of AI known as “machine learning.”
  • Technology is expanding both to create and to expose these video deceptions.

Grand Moff Tarkin, the Death Star commander who destroyed the peaceful planet Alderaan in 1977’s Star Wars, was perhaps the film’s most intimidating villain, setting aside Darth Vader himself. Fans unsurprisingly called for his return in the later prequels depicting pre–Death Star events, and a full 39 years after his debut, the Grand Moff returned to the franchise in 2016’s Rogue One. For the sake of franchise continuity, he was portrayed by the same actor, the accomplished British thespian Peter Cushing. The strange thing, though, was that Cushing had died in 1994, 22 years before Rogue One’s release; all of his “acting” in the prequel was thus computer-generated.

A year later, the technology used to resurrect Cushing received a new name when a Reddit user gained popularity by creating videos that falsely appeared to show mainstream celebrities acting in pornographic scenes. His handle: “deepfakes.” Unlike the producers of Rogue One, who obtained consent from Cushing’s family to use his likeness, “deepfakes” made no pretense of consent, exposing the technology’s potential for nefarious uses. Since then, deepfake imagery has become increasingly difficult to detect, and its potential for abuse has extended well beyond celebrity parody.

In March 2022, for example, a deepfake circulating on social media appeared to show Ukrainian President Volodymyr Zelensky telling Ukrainian soldiers to surrender. The desire to curb such potential abuses without inhibiting further technological development has led to an array of legal reactions at both federal and state levels, as well as calls to update the Federal Rules of Evidence.

What Is a Deepfake?

Deepfakes are computer-generated video files that appear to show a subject person saying or doing particular things that he or she did not do. Although the subject of the video may be entirely synthetic—i.e., a made-up person—the better-known use of deepfakes is the depiction of real persons in unreal events and situations. Deepfakes differ from earlier generations of manipulated photo and video files in that they are enabled by a form of artificial intelligence (AI) known as “machine learning.”

Using existing video of a particular subject as training data—say, clips of Tom Cruise from Mission: Impossible—a computer can be taught to recognize the movements of the subject’s face and then replicate those movements in an artificial video file that appears to show the subject speaking. The extraordinary realism of deepfakes stems from the use of a second network that, armed with genuine video footage of the same subject, compares the artificial file against the genuine item, attempting to identify any flaws that reveal its counterfeit nature. Over thousands or even millions of iterative refinements, these two “adversarial networks” ultimately generate a faked file that is all but indistinguishable from the genuine item.
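To make the adversarial process concrete, the sketch below implements it in miniature using the open-source PyTorch library. It is a minimal illustration, not a deepfake pipeline: the network sizes, training length, and stand-in “genuine” data are placeholders, and real systems train far larger networks on actual video frames of the subject.

    # A toy version of the two "adversarial networks" described above,
    # built with the open-source PyTorch library. Random noise stands in
    # for genuine video frames; everything here is illustrative only.
    import torch
    import torch.nn as nn

    LATENT_DIM, IMG_DIM, BATCH = 64, 784, 32  # placeholder sizes

    # The "first computer": a generator that learns to produce counterfeits.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_DIM), nn.Tanh())

    # The "second computer": a discriminator that tries to flag counterfeits.
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid())

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(10_000):  # "thousands ... of iterative refinements"
        real = torch.randn(BATCH, IMG_DIM)  # stand-in for genuine frames
        fake = generator(torch.randn(BATCH, LATENT_DIM))

        # Teach the discriminator to tell genuine items from counterfeits.
        d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Teach the generator to fool the discriminator.
        g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Each pass through the loop nudges the generator toward output the discriminator can no longer reliably flag, which is why the counterfeits improve with iteration.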

Although deepfakes were until recently the exclusive domain of computer scientists, the increased availability of artificial intelligence software has brought the ability to create them within the reach of ordinary consumers. As early as 2018, the commercial software program FakeApp allowed users to create and share videos with their faces swapped. Newer products have vastly expanded that functionality: deepfakes can now be created from flat imagery containing no depth information, making it possible for the source material—i.e., the original video file of the subject that the user seeks to manipulate into a deepfake—to be captured on a mere cell phone camera. As for audio deepfakes, AI software can now regenerate a human voice from just seconds of recorded speech as training data. A crude version of the face swap such apps automate appears below.
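By way of illustration, the following sketch performs a primitive face swap with the open-source OpenCV library, using classical image processing rather than machine learning; the file names are placeholders. Consumer deepfake apps replace this simple cut-and-blend step with the learned generator-discriminator pipeline described above.

    # Primitive face swap: detect a face in each photo, then blend the
    # source face onto the target with Poisson (seamless) cloning.
    # Classical image processing only; no machine learning involved.
    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def largest_face(img):
        """Return (x, y, w, h) of the largest detected face."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            raise ValueError("no face detected")
        return max(faces, key=lambda f: f[2] * f[3])

    src = cv2.imread("face_source.jpg")   # placeholder: face to paste
    dst = cv2.imread("face_target.jpg")   # placeholder: photo to paste into

    sx, sy, sw, sh = largest_face(src)
    dx, dy, dw, dh = largest_face(dst)

    # Resize the source face to the target face's bounding box and blend.
    patch = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))
    mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
    center = (dx + dw // 2, dy + dh // 2)
    result = cv2.seamlessClone(patch, dst, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("swapped.jpg", result)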

What Are Deepfakes Used For?

Given their obvious potential for parody, deepfakes have received airtime in multiple genres of entertainment. The audience of America’s Got Talent was delighted in June 2022 when a deepfake of Simon Cowell appeared to show the celebrity judge singing alongside one of the program’s contestants. But by far the most common “entertainment” use for deepfakes is pornography. An October 2019 report by a Dutch cybersecurity company estimated that adult-themed videos accounted for more than 90 percent of all deepfakes online.

But advocates of the technology argue that its aesthetic potential has positive uses as well. In something of a reversal of Rogue One’s use of Cushing’s image—i.e., giving an old (dead) actor new lines—some have contemplated using deepfakes to “update” old scenes by changing the race or gender of characters, retroactively inserting elements of diversity and thereby aligning older movies with current social norms.

Outside of the entertainment industry, deepfakes have found prolific use in blackmail and other abusive communications. The Congressional Research Service specifically warned of this potential use in its June 2022 report, noting the potential of deepfakes as a means to obtain leverage over (among other persons) officials with access to classified information.

Where politicians and world leaders are concerned, deepfakes’ potential for abuse goes well beyond blackmail. As the Congressional Research Service noted, the ability to falsely depict a public figure making inappropriate or incendiary statements could affect public discourse and even sway an election. To warn against this very possibility, the actor Jordan Peele created a deepfake in April 2018 that appears to show former President Barack Obama using profanity to describe then-President Donald Trump.

Perhaps even more nefariously, the availability of deepfake technology also gives audiences a reason to disbelieve genuine portrayals as mere “fake news.” Political candidates who have been accurately videotaped saying unflattering things can now plausibly (but falsely) claim that they have been deepfaked, a phenomenon that Professors Danielle Citron and Robert Chesney have dubbed the “liar’s dividend.”

What Have the Reactions Been to Deepfakes?

The U.S. federal government has been actively working to address deepfakes’ increasing prevalence and influence. In 2018, the Defense Advanced Research Projects Agency, the U.S. Defense Department’s research arm, launched a competition to develop automated tools to detect deepfakes and prevent their spreading through social media networks. The following year, the U.S. House Intelligence Committee held hearings on deepfakes to explore their potential use to manipulate elections.

Despite these initiatives and growing concerns, congressional leaders have failed to introduce comprehensive legislative measures on deepfakes, much less enact them. In 2018, Senator Ben Sasse of Nebraska introduced the Malicious Deep Fake Prohibition Act, which would have created a new federal criminal offense for the creation of fake electronic media in furtherance of other illegal conduct. Critics countered, however, that because the act applied only to conduct intended to further existing crimes, it would not reach any activity that was not already illegal, and it was not passed into law. Proposals in the U.S. House of Representatives, such as the 2019 DEEPFAKES Accountability Act, have similarly failed, with critics offering essentially the opposite critique—that a broad definition of “covered media” would potentially extend the law’s reach to creative activity protected by the First Amendment.

Legislative measures have been introduced at the state level as well. New York, Texas, and Virginia have all introduced bills that would punish deepfakes used to effectuate fraudulent schemes. California, however, has gone the furthest, with Governor Gavin Newsom signing bills in October 2019 that provide persons who are the subject of sexually explicit deepfake content with a private right of action against the deepfake’s creator.

Within the legal system, meanwhile, the possibility of deepfake-enabled fraudulent evidence has not yet provoked a judicial reaction, although commentators have suggested that reforms are necessary. Under Federal Rule of Evidence 901, a party seeking to introduce an item of evidence need only make a minimal showing of authenticity, on the theory that opposing parties will challenge phony evidence and that juries are capable of assessing such challenges. Critics counter, however, that where deepfakes are concerned, courts can no longer have confidence in the perceptive powers of lay jurors and, hence, a heightened standard of authentication is appropriate where evidence takes the form of electronic media.
