Grand Moff Tarkin, the Death Star commander who destroyed the peaceful planet Alderaan in 1977’s Star Wars, was perhaps the film’s most intimidating villain, Darth Vader himself aside. Fans unsurprisingly called for his return in the prequels that followed, which depicted pre–Death Star events, and a full 39 years after his debut, the Grand Moff returned to the franchise in 2016’s Rogue One. For the sake of franchise continuity, he was portrayed by the same actor, the accomplished British thespian Peter Cushing. The strange thing, though, was that Cushing had died in 1994, 22 years before Rogue One’s release, and thus all of his “acting” in the prequel was computer-generated.
A year later, the technology used to resurrect Cushing received a new name when a Reddit user gained notoriety by creating videos that falsely appeared to show mainstream celebrities acting in pornographic scenes. His handle: “deepfakes.” Unlike the producers of Rogue One, who obtained consent from Cushing’s family to use his likeness, “deepfakes” made no pretense of consent, exposing the technology’s potential for nefarious uses. Since then, deepfake imagery has become increasingly difficult to detect, and its potential for abuse has extended well beyond faked celebrity pornography.
In March 2022, for example, a deepfake circulating on social media appeared to show Ukrainian President Volodymyr Zelensky telling Ukrainian soldiers to surrender. The desire to curb such abuses without inhibiting further technological development has prompted an array of legal responses at both the federal and state levels, as well as calls to update the Federal Rules of Evidence.
What Is a Deepfake?
Deepfakes are computer-generated video (and, increasingly, audio) files that appear to show a subject saying or doing things that he or she never actually said or did. Although the subject of the video may be entirely synthetic (i.e., a made-up person), the better-known use of deepfakes is the depiction of real persons in unreal events and situations. Deepfakes differ from earlier generations of manipulated photo and video files in that they are enabled by a form of artificial intelligence (AI) known as “machine learning.”
Using existing video of a particular subject as training data (say, clips of Tom Cruise from Mission: Impossible), a neural network can be taught to recognize the movements of the subject’s face and then replicate those movements in an artificial video file that appears to show the subject speaking. The extraordinary realism of deepfakes stems from the use of a second network that, armed with genuine footage of the same subject, compares the artificial file against the real thing, hunting for flaws that reveal its counterfeit nature. Over thousands or even millions of iterative refinements, these two “adversarial networks” ultimately generate a faked file that is all but indistinguishable from authentic footage.
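For the technically curious, the following is a minimal sketch of that adversarial training loop, written in Python with the PyTorch library. It is illustrative only: small random tensors stand in for video frames, and the network sizes and training settings are arbitrary assumptions rather than the architecture of any actual deepfake tool.

```python
# Minimal sketch of the two-network "adversarial" setup described above.
# Small random tensors stand in for video frames so the loop is self-contained.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generator: forges a fake "frame" from random noise.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how genuine a frame looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):  # real systems run vastly more iterations
    real = torch.randn(BATCH, IMG_DIM)   # stand-in for genuine footage
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # 1) Train the discriminator to tell real frames from forged ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

In a genuine pipeline, the random stand-ins would be replaced with frames of the target subject and training would run far longer, but the structure (one network forging, the other auditing) is the same one described above.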
Although deepfakes were until recently the exclusive domain of computer scientists, the increased availability of artificial intelligence software has brought the ability to create them within the reach of ordinary consumers. As early as 2018, the commercial software program FakeApp allowed users to create and share videos with their faces swapped. Newer products have vastly expanded that functionality: deepfakes can now be created from flat, two-dimensional imagery containing no depth information, making it possible for the source material (the original video of the subject that the user seeks to manipulate) to be captured on a mere cell phone camera. And as for audio deepfakes, AI software is now capable of cloning a human voice from just a few seconds of recorded speech as training data.
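To illustrate how little “listening time” the newest tools require, the snippet below sketches few-shot voice cloning with Coqui TTS, an open-source text-to-speech toolkit. The model identifier reflects that toolkit’s documented multilingual cloning model, but the file names are hypothetical placeholders; the key point is that a few seconds of reference audio is the only “training data” the user must supply.

```python
# Illustrative voice-cloning sketch using the open-source Coqui TTS toolkit.
# "reference_clip.wav" is a hypothetical few-second recording of the target voice.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize speech the target never uttered, in the target's cloned voice.
tts.tts_to_file(
    text="This sentence was never actually spoken by the person you hear.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target's real voice
    language="en",
    file_path="cloned_output.wav",     # synthetic audio in the cloned voice
)
```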