Teenage girls at Westfield High School in New Jersey were appalled to discover that boys in their class were sharing nude photos of them, not only because of the violation and perceived betrayal by boys they considered friends and peers, but because the photos were not real. Using an online tool that employed artificial intelligence (AI), some of the teenage boys had “undressed” real photos of the girls and then spread the AI-generated photos among their classmates. The girls quickly reported it, and school administrators promised that the photos had been deleted and were no longer being circulated. However, the girls and their parents harbor fears that the doctored photos will resurface and cause further problems in their future professional or academic lives. One of the victims, a fourteen-year-old girl, described how she and the other female students now feel deeply uncomfortable and newly wary of their male classmates. Some even deleted their social media accounts entirely out of fear of how their innocuous, everyday photos could be used in the future. The incident caught the attention of New Jersey lawmakers, and the New Jersey State Senate has since proposed a bill to address altered and manufactured explicit content, referred to as “deepfake pornography.” The bill, if passed, would prohibit deepfake pornography and impose both civil and criminal penalties on creators of such content. New Jersey is one of the more recent states to attempt to address this issue, and other states will likely follow suit as applications and technology capable of this kind of manipulation become widely available. The proliferation of deepfake technology signals a likely increase in incidents like the one at Westfield High School, and outrage, concern, and inevitably legislation will follow.
Humanity has always attempted to reliably discern the real from the fake, and in the digital age this has only grown more difficult. In every sector, people are anxiously discussing AI and its equally exciting and terrifying potential. The rise of Photoshop taught people to question photographs, and deepfake technology should prompt the same level of skepticism when evaluating video footage. Until recently, it was quite difficult for the average person to seamlessly alter video using publicly available technology, but this is no longer the case. With the advent and rapid improvement of deepfake technology, people are beginning to recognize that they must question the validity of video footage, as it may be altered or even wholly manufactured.
Deepfake videos are creeping into almost every sector in some way, but the majority of those on the internet today are pornographic. The targeted individuals are almost always women, featured in explicit contexts in which they never actually appeared. Understandably, many of the featured individuals are horrified when they discover such content and suffer heavy psychological effects. U.S. law does not currently address the creation or distribution of deepfake videos. The majority of states do not yet have deepfake statutes either, and those that do vary in the extent of the protections they offer. Incidents like the one at Westfield High School are increasing; in response, Congress and numerous state legislatures have recently introduced legislation addressing deepfakes.
This Comment argues that in addition to state and federal bills criminalizing the creation and dissemination of deepfake pornography, victims of deepfake pornography should be able to bring claims based on a violation of their right of publicity. The porn industry is estimated to be a multibillion-dollar industry, with actual revenue figures varying widely depending on the source and year. Deepfake pornography is increasing within the porn industry because there is a market for it: a very lucrative market. The legal system should recognize victims’ personal and economic interests in the use of their likenesses in explicit materials; doing so would also address the economic incentives driving the creation of such material. For the legal system to adequately protect all potential victims, a federal right of publicity must be broad and must recognize an inherent financial interest in each person’s likeness whenever it is put to commercial use. This would ensure that victims can recover damages and that creators and disseminators are further incentivized to ensure the content they circulate is consensual. While some states have passed legislation addressing the creation of deepfake videos, the internet has transcended the boundaries of state regulation, and thus a federal statute should address this issue as well. Social media has given people the power to leverage their likenesses for commercial gain in new ways, and people are finding unique methods of monetizing their likenesses every day. Enacting a long-called-for federal right of publicity would benefit many people in this internet- and image-dominated age, including by giving victims of deepfake pornography greater potential for successful legal recourse for the abuse of their likenesses.
I. Introduction to Deepfakes
Deepfake technology refers to the use of machine learning on photo and video content to replace the features of people in the original content with someone else’s. While deepfake videos are usually detectably fake to those who know how to recognize them, they are very convincing at first glance, especially to inexperienced viewers of such content. Deepfake videos include convincing changes in facial features, skin tone, build, height, aging or de-aging, and even the sound of someone’s voice. Some deepfake videos are largely innocuous humorous parodies, such as a video in which former President Barack Obama’s face and voice have been imposed over popular rapper Ice Spice’s Genius interview, or a video imposing a friend’s face over an actor in a favorite movie scene. Others have potentially problematic political implications. For example, videos of former President Donald J. Trump and current President Joe Biden saying things they never actually said recently circulated and caused concern among those dedicated to preventing the effects of misinformation on our elections. Such applications of AI video alteration likely have First Amendment protections, and some deepfake content creation could serve useful and artistic purposes. Therefore, any potential regulation will need to be crafted in a manner consistent with such protections.
However, useful or artistic deepfakes are the minority; the vast majority of deepfake content available on the internet today is pornographic, and more is created every day. In 2019, ninety-six percent of deepfake videos were pornographic, and it was estimated in late 2023 that the number of deepfake porn videos created and uploaded by the end of the year would exceed that of every previous year combined.
Deepfake technology has been tied to pornography since its inception. The term deepfake first arose in 2017, after a Reddit user with the handle “deepfakes” began creating videos in which he edited the faces of popular female celebrities into porn. The user employed deep learning networks to compile Google images, videos, and stock photos as well as pornography, and eventually the program was able to convincingly merge a celebrity’s face into pornographic videos. This process originally took several days on a standard central processing unit, but since 2017 it has become faster, easier, and more widely accessible. It is also no longer necessary to compile as much data to graft someone’s characteristics onto other content. The app DisCo claims to need only one photo to make a video of you flawlessly performing a viral TikTok dance to a specific song, though the app does say it may access dance videos you have previously uploaded. The more nefarious potential applications of technology that can turn photos into movement are not difficult to imagine. Indeed, apps have been launched for the explicit purpose of allowing users to “undress” photos that originally featured clothed women, a fact that should be deeply concerning given the widespread use of social media.
Pornographic deepfakes overwhelmingly feature women and girls, and creators of such material generally target high-profile figures or people they know. While some deepfake video content is deliberately misleading, deepfake pornography is often labeled as such, meaning viewers are most likely aware it is altered content. However, those inserted into the material often feel violated in ways comparable to those whose genuine pornographic material is shared without their consent, even when they and others know it is not truly them. Those nonconsensually featured in deepfake videos, pornographic or otherwise, may have a variety of reasons to want the content taken down. Such victims may feel entitled to legal remedies but lack sufficient means to pursue them.
II. Current Legal Remedies
As it currently stands, many victims nonconsensually portrayed in deepfake pornographic content have no real way to seek justice through the court system. There is no directly applicable federal law, though legislators are actively seeking to remedy that. Legislative developments are also needed in almost every state, as most states have yet to adopt laws addressing the nonconsensual creation of deepfake pornography. Those states that do have laws addressing deepfakes should amend or extend them to provide better protections for victims, especially in the absence of a federal law. A federal statute addressing deepfakes, and specifically deepfake pornography, should also be created to give victims a federal avenue to sue. Criminalizing the creation or dissemination of deepfake pornography at both levels is necessary to give all victims the right to sue in their choice of either state or federal court. Widespread, standardized legal responses would assist in controlling this area and discouraging the exploding deepfake pornography industry. However, these statutes may not be enough to give victims a sense of justice or adequately curb deepfake creation. Videos and pictures are nearly impossible to completely remove from the internet, meaning the unchecked posting of such content will likely grow more impactful and dangerous as AI improves and the real becomes harder to separate from the fake. Victims, who are overwhelmingly women and girls, can experience negative social, economic, and psychological impacts long after a video that does not star them, but appears to, is first posted on the internet.
The general consensus is that this technology has developed rapidly, changing faster than the lawmaking process can keep pace with, and that lawmakers are now scrambling to determine the best response. At this time, victims seeking legal recourse fall into two categories: famous public figures, such as celebrities, and private individuals without a large public presence. Most state right of publicity laws already allow public figures to sue over unauthorized uses of their likenesses, and public figures have used them to pursue such actions. Public figures are likely better able to make successful right of publicity claims, even in states with narrower statutes, because there are tried-and-true state laws protecting the economic potential of their likenesses. Meanwhile, private individuals in the many states with narrower right of publicity laws and no deepfake pornography law on the books may struggle to successfully sue for damages over shared videos.
III. Current Legal Responses to Deepfakes
Celebrities, particularly female ones, were the first known subjects of deepfake pornography, with early deepfake videos featuring Gal Gadot and Scarlett Johansson appearing on the internet in 2017. The amount of deepfake pornography available increases exponentially every day, and the majority of it continues to feature female celebrities. There are now websites entirely devoted to deepfake pornography, and celebrities such as Taylor Swift and teenage Xochitl Gomez have both recently discovered AI-generated or manipulated images of themselves. Gomez in particular expressed confusion and concern over how difficult it had been for her parents and team to get such images removed from the internet, especially since she is a minor.
Public figures and celebrities can more easily sue over the use of their likenesses in deepfake videos that advertise a product or otherwise use their likenesses for economic gain, but the same avenues that would allow those suits might not provide a claim against the use of their likenesses in deepfake pornography. Public figures can sometimes pursue legal recourse through the Lanham Act, a federal trademark statute that protects names, images, and likenesses from “false endorsements.” Lanham Act claims are fairly narrow, as they require an implication of sponsorship or a likelihood of confusion. Similarly, most state right of publicity laws prohibit only the commercial use of names, images, or likenesses, and some states with narrower statutory or common law protection require that the individual’s likeness already be used commercially. Celebrities and public figures are more likely to be protected under such narrower definitions, and they are more likely to seek recourse under such statutes to protect their likenesses and recoup their value.
The proliferation of social media has redefined the terms celebrity and endorsement, creating new roles such as influencers and internet microcelebrities. These are individuals who may not be household names but are well known within their niche, and they can make a substantial amount of money by posting content aimed at their specific audience. While the Lanham Act would likely protect these people from AI content using their likenesses to promote or endorse a product or company, the level of right of publicity protection will vary based on state law and courts’ interpretations of whether an individual’s likeness has intrinsic value.
In the age of the internet, more people than ever before are leveraging their likenesses for economic gain. Influencers often use their social media followings to strike promotional deals with brands and to monetize their content, which usually revolves around their names, images, or likenesses. In contrast, an internet microcelebrity may be well known within a certain niche but have less of a monetary goal; often people fall into this popularity accidentally. Once a microcelebrity is established, many companies will gladly use them to endorse products. The use of such individuals is likely increasing because they cost less and generally have more sway with their audiences, as gauged by engagement and conversion rates. This social media culture could make having a commercially valuable identity more widespread than ever before: it encourages monetizing the average person’s likeness, and the engagement of even a relatively small number of people can merit compensation from a brand attempting to break into a specific market. State courts could choose to apply relevant right of publicity statutes broadly, but they generally prove loath to extend legislation to cover situations it was not originally written to protect. The current result is legal confusion, inconsistency in state application, and people who feel that injustices have been committed against them but who have no legal recourse.
Celebrities have already begun to sue over unauthorized uses of their likenesses under the theory that such uses violated their right of publicity, and especially under California’s expansive right of publicity law, they will likely find success. Scarlett Johansson’s attorney planned to initiate legal action against Lisa AI, an app that used AI-altered or AI-generated footage of the actress to promote itself. Because her likeness was used to promote a product, the conduct could also potentially violate Lanham Act protections, but it remains unclear which legal theories her lawsuit will rely on. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) has put out a statement expressing its intent to work with lawmakers to pass legislation protecting its union members from the unauthorized development and dissemination of AI-generated or AI-altered images. That public figures have mechanisms through which to address the use of their likenesses in such content speaks to the potentially valuable protections provided by the right of publicity, particularly in the State of California. A prior contestant on the popular reality show Big Brother has initiated a lawsuit in California against a Ukrainian app called “Reface,” saying he hopes to make it a class action by including other high-profile California residents. Notably, unlike individuals in other states, California residents should not need to prove that their likenesses are already profitable or that they are celebrities, which can increase a plaintiff’s likelihood of success when bringing a claim.
The ability to sue under right of publicity laws afforded to public figures is not always extended to private citizens. In many states and in the District of Columbia, right of publicity laws do not clearly cover private citizens. Further, many states currently have no penalty on the books for deepfake pornography, and among those that do, the penalties and levels of protection vary widely. Victims in such states may need to search for other statutes under which to pursue legal recourse, with no guarantee that any will apply. One of the Westfield High School victims has initiated a federal case for damages, but the case is being brought under federal statutes that provide remedies for revenge porn and child pornography.
A. Inadequacy of Revenge Porn Laws in Addressing Deepfake Porn
Revenge porn is a type of image-based sexual abuse in which intimate photos of the victim are nonconsensually shared with others, often by being posted on the internet. There is no specific federal criminal revenge porn law; however, in 2022, Congress created a federal civil cause of action for revenge porn victims. The lack of a specific federal crime means the level of protection and recourse a victim may claim depends on the state where they reside. All states have on the books either a specific revenge porn law or another statute under which the conduct can be prosecuted, with some variation in how exactly they handle it. Most states criminalize it at some level, but many do not create a private right to sue for damages, and those that do limit recovery to emotional distress and actual damages. The emotional distress and negative social harms experienced by victims of deepfake porn are comparable to those experienced by victims of revenge porn, and because of such similarities one might assume deepfake porn victims could bring an action under revenge porn laws. However, when victims of deepfake porn seek legal recourse, they are often frustratingly left out in the cold by their states’ revenge porn statutes. These would-be plaintiffs run into several issues that make it clear the deepfake content they have been edited into is not covered by those statutes.
First, most states require an intent to harass, annoy, alarm, or intimidate the depicted person. This requirement can bar even some victims of traditional revenge porn from bringing a claim, as it does not account for other motives such as personal enjoyment or heightened social status. Many deepfake porn victims would likely also fail to meet this standard, as explicit deepfake content may be created purely for consumer interest and pleasure with no further ulterior motive. This is not the case in every state: South Dakota’s statute includes an intent to self-gratify, Kentucky’s uniquely includes an intent to profit, and New York’s and North Carolina’s include an intent to harm the victim’s financial welfare. Without the intent extensions seen in these few exceptional states, many of which would still be insufficient, creators of deepfake pornography are unlikely to possess the requisite intent to be found liable under revenge porn statutes.
Second, most state revenge porn laws require that the intimate areas depicted be those of the non-consenting person. Where the victim of deepfake porn does not actually appear nude or engaged in sexual acts, the statute’s language does not reach them because their bodies are not the ones that appear in the content. While some state statutes are written more broadly in that they ban dissemination of “sensitive,” “indecent,” or “intimate” images, courts are unlikely to interpret them to cover individuals where the statutorily addressed explicit portions of the content (nudity of intimate areas or sexual activity) depict another individual or are AI-generated. Current revenge porn statutes have already been criticized because many victims have been unable to bring successful claims, and they are even less likely to apply to victims of deepfake porn. Consequently, further legislation is required to ensure all victims may seek justice.
B. Current State Laws on Deepfake Pornography
The amount of legislation addressing deepfake videos has skyrocketed recently as more publicized cases arise. Publicity around deepfake content has been increasing owing both to the impending election, in which many fear deepfake videos could be used to improperly influence voters, and to the growing number of stories about the harms suffered by victims of nonconsensual deepfake pornography. The beginning of 2024 saw the introduction of hundreds of bills related to AI, nearly half of them seeking to address deepfakes. This is an important step toward giving victims legal recourse and beginning to combat the ease and nonchalance with which people have been creating and sharing this nonconsensual content. As of early 2024, at least eleven states had passed laws directly applying to digitally altered pornographic material. The extent of the protection these laws offer varies, especially as some require an “illicit motive,” which may be difficult for a plaintiff to prove. The states that were early to adopt legislation on this issue have chosen to combat unauthorized creation and dissemination through criminal penalties and the creation of private rights to sue.
In fall 2023, New York Governor Kathy Hochul signed into law a bill that criminalized the dissemination of nonconsensual pornographic images created using AI. The bill punishes violators with up to a year in jail and opens up a private right of action for victims. California enacted two statutes targeting the creation of deepfake videos in 2019. One of these statutes, which targeted the use of deepfakes in political campaigns, sunset in January 2023. The other directly targets the creation and intentional disclosure of explicit content and opens a private right to sue over content that the depicted individual did not consent to.
By early 2024, all states except Alabama and Wyoming had bills addressing AI under consideration. The proposed bills largely fall along the same lines in both language and penalties. For example, in January 2024 the Arizona State House of Representatives introduced a bill creating a right of action against “digital impersonation” of Arizona residents depicted nude or engaging in sexual acts, allowing “everyday people” to sue for damages and political candidates to sue for declaratory relief. Other states, such as Utah and Massachusetts, are taking steps to ensure that their revenge porn or deepfake statutes explicitly include AI-altered and AI-generated explicit photos and videos. Over the next few years, legislation will likely continue to develop alongside AI, but early legislation will likely set the tone for which AI companies and content dominate the AI space.
IV. Reconciling Regulation With The First Amendment
The right of publicity concerns speech and thus must be reconciled with First Amendment free speech protections. The regulation of pornography, and thus of deepfake pornography, is limited by free speech rights, but the area has not been left wholly unregulated. Revenge porn laws exist in nearly all fifty states, and the states without them have other statutes under which a person may be prosecuted for distributing revenge porn. Sexually explicit materials involving minors are specifically prohibited federally and carry harsher penalties than violations involving only adults.
Deepfakes inarguably constitute speech, so First Amendment protections certainly do and should apply to deepfake content and will bear on what further regulation of the area is possible. In United States v. Alvarez, the U.S. Supreme Court held that even false speech has First Amendment protections. The Court did, however, carve out certain appropriately narrow prohibitions on false speech that may bring about serious harm, such as impersonation of government officials. This exception gives more leeway to legislators considering avenues to regulate deepfakes of politicians, given fears about the implications for the United States’ democratic process.
The First Amendment treatment of some deepfake pornography may place it within the far less protected category of “obscenity.” The U.S. Supreme Court set forth a test for what constitutes obscenity in Miller v. California, holding that material is obscene when it (1) “appeals to [] prurient interest[s]” based on “contemporary community standards,” (2) depicts “sexual conduct in a patently offensive way,” and (3) “lacks serious literary, artistic, political, or scientific value.” Material must satisfy all three prongs to be considered obscene and fall outside First Amendment protections. The Miller test is in many ways unfortunately vague, and when applied to content circulated on the internet, the relevant question becomes: whose community standards should apply? The Supreme Court has not fully addressed this question, though when it struck down the obscenity regulations in the Communications Decency Act (CDA) as overbroad, one factor it cited was that the “‘community standards’ criterion” meant internet content would be judged by whatever community was most likely to be offended.
When crafting civil and criminal penalties, legislators should proceed with caution, remaining aware that any penalties they create must be narrowly tailored to avoid infringing on First Amendment rights. The right of publicity governs media of expression such as photos and videos as well as written and verbal speech, and it must be acknowledged that sometimes, especially in the case of public figures, a likeness is used for a particular creative expression or message. Individuals’ interests in their identities must therefore be balanced against others’ constitutionally protected free speech rights. The current understanding of this balance is murky at best, and courts have disagreed over First Amendment protections for the use of likenesses, likely because of the variation in states’ treatment of the right of publicity.
The creation of a federal right of publicity, paired with legislation creating a private right of action for people nonconsensually featured in pornographic material, should not conflict with First Amendment protections because pornography may be considered a commodity. The Supreme Court has held that commercial speech may be regulated to a greater extent than most other forms of speech. Commercial use of an image on merchandise or in replication has been considered unprotected speech, and many courts have held that under copyright law, there is no First Amendment protection without some transformative or expressive element in the use of a name or likeness.
Protecting the economic interests of an individual whose likeness is inserted into explicit content does not necessarily conflict with First Amendment concerns; rather, it is necessary to promote continued free expression. The right of publicity protects a person’s rights to commercial value, dignity, control, and performance. This protection prevents unjust enrichment and preserves an individual’s commercial value, which may be damaged by a perceived appearance in explicit content. From both commercial and personal standpoints, allowing First Amendment interests to wholly displace the right of publicity would be inconsistent with our recognition of people’s freedom to enjoy and protect their economic potential and with our desire to avoid unjust enrichment. Pornography is in many ways a recognized and regulated commercial enterprise like any other industry, which means compensation should be required when people’s likenesses add value to the product, just as it would be in other industries or advertisements.
V. The Right of Privacy
Deeply intertwined with the right of publicity is the right of privacy, which could be considered the other side of the same coin. A constitutional right to privacy in some form has been recognized since Griswold v. Connecticut in 1965. Though U.S. jurisprudence around this right has continued to shift, the conviction that U.S. citizens have some right to govern their life paths and choices has remained. There is also a federal right of privacy on the books: the Privacy Act of 1974 codified some common law privacy rights and further expanded protections for individuals’ privacy regarding information collected and maintained by the government.
The extent to which the right of privacy is recognized by courts and explicitly protected by legislatures has varied over time, but it has endured as a concept behind the creation of some federal laws, including some that bear on the right of publicity. For example, the right of publicity has been statutorily extended to an individual’s control over the accumulation and use of personal information by other entities. The rapidly accelerating use of AI poses privacy concerns, as programs such as ChatGPT can access content from across the internet, including social media posts and any information users input into the conversational interfaces of many AI tools.
Image-generating AI programs such as DALL-E and Midjourney learn what to create from information and examples pulled off the internet, often without the owner’s or creator’s permission. AI programs learn and improve using any information and creations on which they are directed to train, including personal information and copyrighted works. Creating deepfake content that realistically depicts someone requires information about that person drawn from videos, pictures, and voice recordings, all of which the AI may have acquired without explicit permission. Substantial privacy concerns accompany AI, especially when it is used to create content that infringes on a person’s personal life and depicts that person in the most intimate possible manner; indeed, such technology could be the biggest real threat to privacy yet seen, and regulating AI, and deepfake porn in particular, is consistent with the sentiments behind a right to privacy. Many states without an explicit statute recognize a violation of the right of publicity through a privacy-based misappropriation tort, further demonstrating the connection between these two concepts. Privacy and publicity are connected: one’s ability to leverage one’s likeness and online presence for economic purposes can disappear when privacy has been so wholly violated that little of the individual’s real self remains to meaningfully claim.
VI. The Right of Publicity
The need for a federal right of publicity statute has been recognized for over a decade. In the digital age, almost all content is accessible nationwide, and continuing to tolerate inconsistencies among state right of publicity protections presents challenges for content creators and plaintiffs alike. In Haelan Laboratories, Inc. v. Topps Chewing Gum, Inc., the Second Circuit transformed the right of publicity into the economically focused form most states use today. Before Haelan Laboratories, the right of publicity was grounded in the sentiments behind the right of privacy and aimed to protect individuals from suffering indignities and abuses through unsanctioned use of their images. Haelan Laboratories determined that a famous baseball player’s likeness was an assignable right that, once committed exclusively to one company for advertisements, could not also be given to a rival company. Following this decision, the right of publicity began to be treated as a property right.
The need for a federal right of publicity has only increased with the advent of AI. As AI technology has grown, Congress has begun to seriously consider passing a federal right of publicity. A recent Congressional Research Service report discussed a federal right of publicity as a response to the proliferation of AI. The report also addresses the current circuit splits over the state law right of publicity and how such laws have come into conflict with Section 301 of the Copyright Act. Because Section 301 establishes federal preemption of state copyright laws, some courts have held that exclusive rights granted under the Copyright Act supersede right of publicity claims, while other courts have found that the claims are not preempted. Amending Section 301 and creating a federal right of publicity could resolve these inconsistencies and grant clarity to courts nationwide.
Both economic and noneconomic rationales underlie the right of publicity, and both support establishing protections for victims of deepfake pornography. Noneconomic goals stemming from privacy law, such as averting emotional harms, converge with economic goals like preventing unjust enrichment and protecting a person’s economically viable resources; together they support allowing victims to use the right of publicity to bring claims against those who make and distribute deepfake porn. A focus on the property-adjacent treatment of the right of publicity has led many states to limit the right to protect only those who have already monetized their identities or likenesses. A more holistic and historically supported approach would find value in all likenesses and give people control over their likenesses, accounting for right to privacy concerns as well, as these concepts bolster each other.
A. The Variations in State Right of Publicity Laws
State right of publicity statutes vary substantially, but they generally share certain characteristics and protect certain individuals. California’s right of publicity statutes protect against unauthorized uses of likenesses both for those who already profit from their identities and likenesses and, in some courts’ interpretations, for those who do not. On the other hand, Utah courts have held that an identity must have an “intrinsic value,” which bars most private individuals from having a claim. Though interpretations vary, courts have generally found intrinsic or commercial value when plaintiffs were already making money off their likenesses or when defendants profited monetarily from the infringing use of plaintiffs’ likenesses. States also vary over whether their laws protect the persona or grant a post-mortem right, with some remaining unclear for lack of litigation and legislation. The remaining states fall somewhere along the spectrum of protection, but many impose a more stringent standard or have been interpreted to limit protection, and many have not had every element litigated so as to give courts the chance to create a common law rule.
In Fraley v. Facebook, Inc., California residents successfully brought a class action claim in United States District Court against Facebook for using their photos, names, and the assertion that they “liked” an advertiser to endorse that advertiser to their Facebook “friends.” The plaintiffs asserted a theory of economic harm sufficient to withstand a motion to dismiss: they were not compensated for the use of their likenesses in Facebook’s advertisements, even though those advertisements were allegedly worth two to three times more than traditional advertising. This case would not have succeeded in many states because of the limited extent of their right of publicity protections. Indeed, the court pointed out that “California courts have clearly held that ‘the statutory right of publicity exists for celebrity and non-celebrity plaintiffs alike.’” When the likenesses of average Americans have economic value to the companies that use them, protecting only those people who already use their likenesses commercially leaves a subset of people whose valuable likenesses can be used without permission and without repercussions.
B. Proposed Federal Legislation Addressing Deepfakes
As AI continues to develop and deepfakes increasingly feature high-profile individuals and political candidates, Congress has begun to address the need for legislation in this space, and it is possible that a federal bill addressing deepfakes will be passed in the next year. While newly introduced bills have aimed to create a private right to sue for actual damages, it is also important that such laws explicitly address deepfake pornography. Addressing the issue directly in legislation will make it clearer to plaintiffs that a mechanism to sue exists and will establish definitions of newly developing terms, assisting courts in their analysis. A private right to sue for actual damages could then be used in conjunction with the right of publicity so that victims can bring stronger claims.
Several members of Congress came together in a bipartisan effort to release a discussion draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (No Fakes Act) in late 2023, which aimed to protect actors and singers from the unauthorized AI creation of deepfakes of their voices and likenesses. The draft was widely criticized for a variety of potential inadequacies: those seeking a general right of publicity law were disappointed by its limited AI focus, while others cautioned that it might not provide sufficient protections. The Act’s sponsors also claimed it would attempt to hold platforms liable for hosting unauthorized digital replicas, which may not be possible given that the provisions of the Communications Decency Act may preclude such liability.
In September 2023, the “DEEPFAKES Accountability Act” was introduced in the House of Representatives. The bill specifically targets deepfake creation and is intended to prevent confusion or deliberate misleading arising from deepfake audio and video content, particularly content involving political candidates, though it also has provisions addressing the creation of pornographic materials. The DEEPFAKES Accountability Act does not attempt to stop the creation of such materials but rather requires a disclosure identifying the content as altered or created by AI or similar technology. It imposes criminal penalties on those who, within the delineated intent requirements, fail to include the disclosure, remove the disclosure, or otherwise alter the disclosure. If the content features the person nude or engaging in sexual acts, criminal penalties may be imposed where the failure to disclose or alteration of the disclosure was done “with the intent to humiliate or otherwise harass the person falsely exhibited.” The Act would impose civil penalties against any person who violates the disclosure requirements, and it would create a private right of action allowing any person who “has been exhibited as engaging in falsified material activity in an advanced technological false personation record” to sue for both damages and injunctive relief. It allows for both actual damages and statutory damages, with the greatest level of damages, at $150,000 per record, specifically allotted for people depicted in visually explicit sexual content “intended to humiliate or otherwise harass the person falsely depicted.”
The DEEPFAKES Accountability Act would be a substantial step in the right direction toward imposing workable restrictions on AI-generated and deepfake content. Requiring that content be labeled as fake would help ensure that the public knows what is real and what is manufactured, no matter the substance and purpose of the content. However, the Act as written likely still leaves many victims of deepfake pornography with limited recourse given the cap on statutory damages.
Regarding the imposition of criminal penalties, it is worth noting that the legislation should include an intent element. The bill’s language closely resembles the text of state laws penalizing the distribution of revenge porn, which would ensure that people targeted for the same purposes as revenge porn receive at least some recourse. However, some deepfake pornography is created not to harass or humiliate the people it depicts but to serve the interests or pleasure of consumers, and victims of deepfake porn made for pleasure or economic gain should still be able to bring claims. The intent requirement is attached to the provision for the highest level of statutory damages. Anyone bringing a claim over content created and distributed without the requisite intent, but which caused the depicted person or entity to experience “a perceptible individual harm or face a tangible risk of experiencing such harm,” could recover lesser statutory damages or actual damages. Actual damages could be hard to ascertain depending on the situation, and many victims would find them difficult or impossible to prove. If an expansive enough federal right of publicity were also created, it could give more weight to a claim for actual damages, because under a widely applied right of publicity such as California’s, actual damages may be easier to prove. It may also be wise to include an explicit provision for criminal and civil penalties when the falsely depicted subject is a minor, without any intent requirement.
Following the creation of AI-generated explicit images of Taylor Swift and their quick and widespread dissemination primarily on X (formerly Twitter), outraged fans and X users called for legislation regulating such images to protect not only Swift but also other victims of explicit deepfake material. Legislators quickly responded by proposing the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the “DEFIANCE Act.” The bill would allow victims of forged explicit materials to pursue civil and criminal penalties against individuals who created the forged materials or possessed them with the intent to distribute. The bipartisan bill would also extend penalties to those who receive the material knowing it was a nonconsensually created forgery. The DEFIANCE Act is likely vulnerable to the same criticisms and challenges faced by previously proposed bills. However, the growing concern among Americans and their lawmakers about AI use and the protection of internet users suggests that at least some regulation of deepfakes will eventually be created.
Conclusion
Enactment of a federal right of publicity and legislation creating a private right of action for victims of deepfake pornography would strengthen such victims’ claims and better reflect the desire to protect privacy and publicity rights in the age of the internet. The unchecked proliferation of deepfake content poses problems for society at large, but pornographic content is a particularly sensitive area because of the emotional and practical fallout that can result from it. As legislators respond to the need to regulate deepfake content, they should make explicit deepfake content a higher priority, because the pornographic nature of the majority of deepfakes makes it a sensitive topic that concerns many of their constituents. Criminal and civil penalties are necessary and will provide much-needed recourse for victims, but more impactful channels for civil claims will better dissuade both creation and dissemination. Enacting a broadly constructed federal right of publicity statute would give victims of deepfake content another avenue to sue, one with substantial potential monetary benefits. Increasing potential financial penalties would help give victims a sense of justice, and the increased liability would discourage potential creators and disseminators. It should be acknowledged that explicit deepfake content is created not merely for artistic purposes or personal pleasure, but because it is profitable. Further, a broad federal right of publicity is needed to homogenize the availability of claims and circumvent the confusion of navigating state law variance. A broader right of publicity is also necessary to better protect the right to privacy and to adequately respond to the way likenesses are used in the age of the internet, where the delineation of a commercially viable identity is murkier than ever.