

Landslide® Article Archives

From Deepfakes to Deepfame: The Complexities of the Right of Publicity in an AI World

Eliana A Torres

Summary

  • The right of publicity, recognized through state statutes or case law, allows individuals to protect their name, image, likeness, or other identifiable attributes (NIL) from being used without their permission.
  • The right of publicity can protect individuals from deepfakes and limit the postmortem use of their NIL as digital versions.
  • Applying the right of publicity to AI involves balancing protecting individuals’ rights and the potential benefits of AI use, as well as freedom of speech and artistic expression.


As technology continues to advance at an unprecedented rate, it’s no surprise that interest in artificial intelligence (AI) has skyrocketed in the past year. One fascinating trend that has emerged is the desire to create AI-powered versions of ourselves, alter the appearance of other individuals with AI, use AI-produced headshots, and resurrect celebrities and public figures as deepfakes—clones that are almost identical to real people, including their voices. The existence and growing popularity of such AI personas raise complex legal and ethical questions about the intersection of this technology and personal identity, where the right of publicity has emerged as a crucial legal concept for protecting individuals. However, the right of publicity, which traditionally protects individuals from unauthorized commercial exploitation of their name, image, or likeness, becomes murky when applied to AI and raises difficult questions for legal practitioners. So, does the right of publicity apply to AI creations? If so, how does it apply?

AI Personas and Deepfakes: Digital Doppelgängers

Advances in AI are being seen across all forms of media, from Hollywood to government to social media influencers. For example, Hollywood’s latest trend involves using deepfakes of both living and deceased actors in films. These virtual clones are so realistic that they are almost indistinguishable from the actual person. For instance, Virgin Voyages released a commercial featuring “Jen AI,” an AI clone of Jennifer Lopez, who licensed her image for the creation of her AI twin.

The rise of virtual influencers is another fascinating example. These AI-generated personalities have amassed millions of followers on multiple social media platforms and are able to promote products and influence consumer behavior without ever existing in real life. The implications of this phenomenon are far-reaching, raising questions about identity authenticity and consumer protection. One such virtual influencer, Miquela, has managed to garner millions of followers on her social media platforms. Her popularity has even led to sponsorship deals, partnership agreements, and modeling opportunities. The emergence of such virtual influencers has certainly changed the landscape of the industry and has opened new avenues for brands and marketers to explore.

While some view deepfakes as harmless entertainment, others worry that they could be used to spread misinformation or to harm individuals, as the technology gives anyone the power to create convincing videos of people doing or saying things that never actually happened. As more people gain access to this technology with the ability to accurately depict famous figures, politicians, and average individuals, we are forced to confront our conventional understanding of the right of publicity in order to determine if and how it applies to AI.

A Brief Primer on the Right of Publicity

The right of publicity is a legal concept that allows individuals to protect their name, image, likeness, or other identifiable attributes (referred to as NIL) from being used without their permission. Historically, it is a derivative of the privacy right of appropriation. While there is no federal law in the United States that recognizes the right of publicity, many states recognize different forms of this right through statutes or case law. Still, in today’s global digital economy, it is challenging to apply right of publicity laws that exist only at the state level. Despite these challenges, the right of publicity can protect individuals from deepfakes and unauthorized use of their likeness, as well as limit the postmortem use of their NIL as digital versions.

For example, California has a comprehensive set of laws in place to protect an individual’s right of publicity. Under California’s Civil Code section 3344, individuals are entitled to control the commercial use of their name, voice, signature, photograph, or likeness. If this law is violated, damages can be awarded, including actual damages, profits from unauthorized use, and statutory damages. Similarly, New York acknowledges the right of publicity under common law and specific statutes to address unauthorized uses of an individual’s likeness, and in recent years has extended the right postmortem. Florida also has its own versions of right of publicity laws that deal with the unauthorized commercial use of an individual’s name or likeness. On the other hand, there are states that don’t recognize the right of publicity, such as Maryland, which does not have a civil right of publicity statute but recognizes the common law tort of appropriation invasion of privacy. Interestingly, Arizona enacted criminal and civil statutes specifically protecting against the unauthorized use of the “name, portrait or picture” of soldiers for “commercial purposes.” These are just a few examples of the state-by-state variations.

Navigating the application of the right of publicity framework can be challenging due to the various state law frameworks, and it is even more complex when considering the use of AI and virtual characters across multiple jurisdictions through online platforms. It’s a delicate balance between protecting individuals’ rights and the potential benefits of AI use, as well as freedom of speech and artistic expression. For these reasons, lawmakers have turned their attention to a potential federal right of publicity. In a recent hearing, the U.S. Senate Judiciary Committee’s Subcommittee on Intellectual Property heard testimony from various AI companies where Adobe representatives promoted a federal right of publicity to protect unauthorized commercial NIL use. Despite the lack of a unified approach to the right of publicity, the next section explores the different applications to AI clones and deepfakes and exposes the most common unresolved questions.

Application of the Right of Publicity to AI

AI and Its Likeness

The right of publicity has long been a subject of debate, particularly with regard to the challenges of determining whom it protects and who can claim this right. Unfortunately, the answers to these questions vary significantly from one state to another, making it difficult to establish a uniform standard. This lack of consistency has led to ambiguity, and it is a matter of concern for many individuals seeking to assert their rights of publicity. For instance, certain states limit the right of publicity to celebrities and the exploitation of the commercial value of their likeness, while others extend it to ordinary individuals who can prove the commercial value of their image. The emergence of deepfakes and digital clones has only made matters worse, as fictional personas can be generated from the likeness of regular people, created by AI using data scraped from our online activities. In the future, we may even witness the emergence of social media accounts for our digital twins, which could potentially become more popular than the real us.

In fact, right of publicity laws and their impact on ordinary individuals were brought to the forefront in an episode of Netflix’s Black Mirror titled “Joan Is Awful.” The episode depicts Joan, an average woman, unknowingly giving her streaming service the right to use her image by agreeing to its terms and conditions without reading them. Even if there had been no terms of use, Joan’s lack of celebrity status could still leave her unprotected, as right of publicity laws in many states protect only celebrities, leaving noncelebrities like Joan vulnerable. This scenario could be more complicated if an individual grants the use of their image for a specific purpose, such as a film, and their clone is created and then duplicated. This could lead to a “clone of a clone” scenario, which presents even more legal questions. Could an individual assert their rights over a clone of a clone?

The Black Mirror episode serves as a cautionary tale about the importance of reading and understanding the terms and conditions of any agreement, especially when it involves the right of publicity, and particularly for average individuals who lack the same protection as celebrities. It also highlights the need for right of publicity laws that do not discriminate based on an individual’s level of fame or influence.

In California, there is a statutory and common law right of publicity. There, an individual is not required to be a celebrity, but they must prove that they have a commercially valuable identity. Under common law, even the inadvertent appropriation through the use of AI of a celebrity’s name, likeness, voice, signature, identity, or persona could be actionable. In California, the average Joan could potentially have a claim under right of publicity laws (provided no terms of use had been signed).

In comparison, the results would vary slightly in Illinois, where the right of publicity protects against the unauthorized use of an individual’s NIL, but only if the use is for a commercial purpose. This means that merely creating a deepfake is not actionable; the deepfake must also be used for a commercial purpose, such as advertising or marketing.

It is worth noting that in several states, the right of publicity extends to the voice of an individual and to recognizable attributes, such as body parts. Essentially, the use of AI-generated voices without consent could potentially violate this right. In early 2023, the song “Heart on My Sleeve,” featuring the AI-generated voices of Drake and The Weeknd, created by anonymous songwriter Ghostwriter-977, gained widespread attention. This particular case raised various legal concerns, and it brought attention to the increasing use of AI-generated voices in the music industry. However, the use of AI voices is not new to the film and gaming industries. Video game makers have been using entirely synthetic voice-overs, and film distributors have been using voice cloning to translate movies and do reshoots without requiring actors to be present.

The issue of misappropriation of AI voices has been a topic of growing discussion in recent months. Although there have not been any major court cases in this regard, some older cases can shed light on the matter. One such case is that of Bette Midler, who sued Ford Motor Co. in 1988 over a sound-alike rendition of her performance of the song “Do You Want to Dance?” The U.S. Court of Appeals for the Ninth Circuit held that advertisers may not intentionally imitate the distinctive voice of a celebrity singer to sell or advertise their products without risking liability for misappropriation. In a separate case with a similar result, the Ninth Circuit ruled in favor of singer Tom Waits, who received millions in damages for the impersonation of his “raspy, gravelly singing voice” in a corn chip radio advertisement. These cases set a precedent for potential legal recourse against the unauthorized use of individuals’ voices by AI clones.

However, it is important to note that cases involving voice right of publicity often face copyright preemption defenses. This can bar the right of publicity claim due to the broad preemption provision in the Copyright Act. The issue of copyright preemption in AI-generated works is complicated due to the Copyright Office’s guidelines on the copyrightability of “Works Containing Material Generated by Artificial Intelligence.” With these new guidelines, it is uncertain how much of the material in Ghostwriter-977’s songs would be considered copyrightable. This uncertainty has made it difficult to determine when copyright preemption applies. It’s also worth noting that trademark preemption can occur but is less common. It usually applies when there is a conflict between federal trademark and unfair competition laws and the right of publicity. In most cases, however, the right of publicity works hand in hand with trademark law to ensure that creators are able to protect their likeness and maintain control over its use in all forms.

When right of publicity infringement involves other recognizable attributes, courts have often determined that individuals can claim rights of publicity for their body parts if they are easily recognizable (although this varies by state). One potential area of concern is the use of deepfake technology to recreate an individual’s unique and recognizable body parts. As of yet, there have been no reported cases of AI clones featuring recognizable personal attributes, such as tattoos or other unique features. However, we have seen analogous cases where AI has not been used. The latest case involved a rapper’s unique back tattoo that was featured, without the rapper’s permission, on the cover of Cardi B’s mixtape “Gangsta Bitch Music Vol 1,” and a jury ruled that there was no infringement of the right of publicity. As AI technology advances and becomes more capable of imitating identical physical features, images, voices, and body parts, it’s important to consider how our legal system will adapt to these emerging challenges.

Posthumous Rights: The Digital Resurrection

The power of AI has also expanded to the resurrection of celebrities as deepfakes. For example, we have seen the holographic performance of Tupac Shakur at the Coachella Music Festival in 2012 and the digital recreation of actors like Peter Cushing in Rogue One: A Star Wars Story. The regulation of posthumous publicity rights varies by state, with many states only applying the law to individuals who are considered celebrities or can demonstrate commercial value. However, the rise of AI technology has raised concerns about the exploitation of ordinary deceased individuals. Furthermore, there is a lack of uniformity among states, with some failing to address the issue of post-death expansion of this right. The implications of these inconsistencies are far-reaching, highlighting the need for a more comprehensive approach to regulating posthumous publicity rights.

The most pressing issue is determining which state postmortem right of publicity law applies:

(1) the domicile of the celebrity at the time of death, the approach of the Second Circuit and the Federal District Court for the Central District of California; (2) the situs of the tort, the statutory approach of Nevada and Indiana, and the tack taken by the Federal District Court for New Jersey in Estate of Presley v. Russen; [or] (3) the domicile of the plaintiff, the holding in Prima v. Darden Restaurants, Inc. and Allison v. Vintage Sports Plaques.

Besides inquiries regarding the postmortem right of publicity and its recognition by states, there is also an issue of ownership. When a celebrity is revived through the use of deepfake technology, the question arises as to who is entitled to the profits generated, whether it be the heirs or the estate. This adds another level to the right of publicity, as laws regarding this matter differ greatly between jurisdictions and often fall behind the swift advancement of AI technology.

Automated Creations

Historically, copyright laws have been centered on human creators. Copyright laws were drafted with the intent to protect the intellectual labor of individuals, ensuring they could benefit from their creations by granting them exclusive rights over the distribution and use of their works. The Copyright Office has acted consistently with the policy reasons behind the copyright laws in the cases of Thaler v. Perlmutter and Kristina Kashtanova. In the first case, Dr. Stephen Thaler sought to register an AI-generated artwork titled A Recent Entrance to Paradise with the Copyright Office, naming AI as the author. The Copyright Office refused on the grounds that the work “lacked human authorship,” and a lawsuit against the Copyright Office followed. Ultimately, the Copyright Office was granted a cross-motion for summary judgment against Dr. Thaler, which Dr. Thaler is appealing as of October 18, 2023. A similar stance was taken by the Copyright Office in the case of Kristina Kashtanova’s comic book titled Zarya of the Dawn. The Copyright Office granted Kashtanova copyright protection over the comic’s text “as well as the selection, coordination, and arrangement” of its visual elements but not over the images themselves, as they were made with AI and do not qualify as works of human authorship.

The Copyright Office bolstered its reasoning in the Kashtanova case in a more recent decision rejecting the copyright registration of the work of art created by Jason M. Allen that won first place at a Colorado State Fair Fine Arts Competition. Allen’s work was denied a copyright registration based on the principle that copyright protection requires human authorship and that the work in question contained more than a de minimis amount of material generated by AI, which had to be disclaimed in the application for registration. The decision explained that Allen’s sole contribution was inputting a text prompt that was interpreted and compared to the AI’s training data, resulting in an image that was determined and executed by the AI, not the human user. Notably, the decision acknowledged that the applicant’s modifications of the AI-generated image using Adobe Photoshop and Gigapixel AI may have contained sufficient human authorship to be registered, but the decision did not address that issue because the applicant refused to limit his claim to exclude the AI-generated material.

Current copyright guidance and recent cases do not provide a clear answer for situations in which AI evolves to create works independently. It is uncertain what the outcome would be if a deepfake were able to create its own works. It is possible that it would be treated similarly to Thaler’s AI author. But what about deepfakes that gain worldwide popularity, like Lu do Magalu, a virtual influencer with more than 14 million followers on Facebook and 6 million followers on Instagram? If Lu do Magalu were cloned, would she or her creators have a right of publicity claim if her image were used by someone else for commercial purposes and recognized by consumers?

The primary question is one of ownership. If an AI model creates a work of art or a piece of music, who owns the rights? Is it the developers who designed and trained the AI, given that their technical expertise and resources facilitated the creation? Developers of AI creations may argue that they should own the rights to AI-created works. On the other hand, if an AI is trained on public data or uses input from multiple users, who should own the resulting creations? Navigating this intricate landscape requires a delicate balance. If AI becomes a common creator, redefining copyright laws will be inevitable. The challenge lies in balancing the rights of AI developers, potential human collaborators, and the broader public. These are the kinds of questions we need to resolve.

Current Case Law on the Right of Publicity and AI

There have been few cases challenging AI and publicity rights violations. The first case was filed in 2020 by an Illinois resident and then consolidated into multidistrict class action litigation, claiming that facial recognition technology company Clearview violated the Illinois Biometric Information Privacy Act (BIPA) by failing to obtain informed consent before collecting, storing, using, and profiting from his and other residents’ biometric data. Clearview offered a searchable facial recognition database, which collected biometric data from publicly available online photos. Under California law, the plaintiffs alleged that Clearview violated the Unfair Competition Law, the statutory and common law right of publicity, and the right to privacy under the California Constitution. Under New York law, the plaintiffs alleged that Clearview violated Civil Rights Law section 51 by using their photographs without consent for trade purposes. Although this litigation is ongoing, the right of publicity claims in California and New York have survived motions to dismiss. Nevertheless, the parties are in settlement conversations, and it is unlikely that we will see any precedent on the right of publicity issues.

The second case involved a group of artists claiming copyright infringement, unfair competition, and violation of their right of publicity. During a July 19, 2023, motion to dismiss hearing, Judge William Orrick out of the U.S. District Court for the Northern District of California expressed his inclination to dismiss most of the claims without prejudice. When discussing the right of publicity, Judge Orrick noted that its purpose is to prevent misleading endorsements, but using a generative art program to create works in the style of a particular artist does not imply an endorsement from that artist. Judge Orrick specifically stated: “The reason for publicity rights [is] to prevent misleading endorsements of goods and services from people who [are not] actually endorsing something, but no one thinks that if you use a generative art program to create works ‘in the style of Picasso’ . . . that Picasso endorsed it.”

In the latest and currently ongoing litigation, Kyland Young, a contestant on the popular TV show Big Brother, filed a class action complaint in the U.S. District Court for the Central District of California against software developer NeoCortext. Young alleged that NeoCortext’s AI-powered “Reface” application, which allows users to digitally “swap” their faces with celebrities and public figures in photos and videos, violates his right of publicity under California’s right of publicity statute and common law. In September 2023, the court denied NeoCortext’s motions to dismiss and to strike Young’s right of publicity claim. The court found that Young had shown a probability of prevailing on the right of publicity claim by alleging that NeoCortext had knowingly used his name and likeness in its products without his consent and to its commercial advantage. The court rejected NeoCortext’s arguments that Young’s claim was preempted by the Copyright Act or barred by the First Amendment, finding that Young’s claim did not fall within the subject matter of copyright or involve a transformative use of his likeness as a matter of law. The court concluded that Young had stated a legally sufficient claim and made a prima facie factual showing sufficient to survive NeoCortext’s motions.

These cases have not established any precedent, and without clear decisions or legislative guidance, it is probable that more lawsuits will arise regarding the use of AI creations and the right of publicity.

Conclusion

As technology continues to advance at a rapid pace, the intersection of AI and the right of publicity presents a complex and challenging issue. In order to keep up with these developments, our legal systems must evolve and adapt accordingly. These challenges also provide opportunities for growth and development. In navigating this complex landscape, it’s important to approach AI with an open mind and a willingness to learn as we work to establish the rules and regulations that will shape the future of the right of publicity.

©2024. Published in Landslide, Vol. 16, No. 2, December/January 2024, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.
