Legal challenges loom in wrangling deepfakes
April 14, 2025 Technology
Deepfake technology can be used for good but can also be abused in ways that pose risks to individuals and society at large, prompting the need for legal safeguards, according to panelists at the April 8 webinar “The Congressional Series: AI and Deepfakes.”
The program, sponsored by the ABA Center for Innovation, examined the growing impact of deepfakes — images, videos or audio that have been edited or generated using artificial intelligence or AI-based tools — and how the technology intersects with the legal system.
Gary Corn, a professor and director of the Technology, Law & Security Program at American University’s Washington College of Law, said deepfakes have been called “the 21st century answer to Photoshopping” and are known as “synthetic media.” The technology’s broad spectrum of uses can serve educational, artistic and communicative purposes, but it can also be employed in harmful ways, such as sexual exploitation and revenge pornography, he said.
Additionally, “I can absolutely see the technology used as an accelerant for the creation and spread of foreign malign influence,” which can be detrimental to U.S. national security interests, Corn said.
The TAKE IT DOWN Act, a bipartisan bill sponsored by Republican Sen. Ted Cruz of Texas and Democratic Sen. Amy Klobuchar of Minnesota, addresses the dissemination of nonconsensual intimate images, known as NCII, on digital platforms, said Lakshmi Gopal, staff attorney at Amara Legal Center. The bill criminalizes the publication of NCII in interstate commerce and requires websites to take down the images upon notice from the victim.
“It provides a very timely and urgent update to the Communications Act that will empower everyday litigants and could help change some of the most pressing difficulties with the current dangers of social media use,” Gopal said.
However, potential censorship, lack of safeguards against abuse and over-removal of materials by platforms are among the concerns about the bill, she said.
Paul Grimm, a digital evidence expert and former U.S. district judge, agreed the TAKE IT DOWN Act is a commendable first step in the right direction. Prior to 2017, “no one was talking about deepfakes,” he said, but the number of deepfakes online has since doubled every six months, with many originating outside the U.S. Billions of dollars are being invested in the technology, he said.
“We’ve gotten to the point now where deepfake (technology) is so good, it’s so readily available and it can be used and created at a very low entry point. It doesn’t cost very much money. … It’s hard to get these things taken down.”
Grimm added that courts will face challenges over potential overbreadth in definitions of deepfakes and over whether statutes give proper notice of criminal liability. “Drawing the line is going to be a challenge,” he said, so that regulation of the technology doesn’t infringe on creativity, intellectual expression or First Amendment rights.
Deception is the dividing line between beneficial and harmful uses of deepfake technology, Corn said. “We need to start focusing more attention on the foreign influence piece of it. … A lot of these issues are left to the states to grapple with. They are not generally built with the capability and the capacity to deal with these types of issues. We need to look at how we can better enable and build capacity at state and local levels to deal with these problems.”
Harvey Rishikof, senior counsel for the ABA Standing Committee on Law and National Security, moderated the webinar.