
ARTICLE

An Update on the State of Play with Generative Artificial Intelligence and Intellectual Property Issues

Francelina Perdomo Klukosky and Matthew D. Kohel

Summary

  • This article provides an update on recent legal developments involving the implications for intellectual property rights arising from the use of generative AI.
  • There have been significant developments involving the intellectual property issues relating to the use of generative AI in the last year, such as in Thaler v. Perlmutter and Thomson Reuters Enterprise Centre GMBH v. Ross Intelligence Inc.
  • The landscape of legal claims relating to AI and intellectual property will continue to evolve, much like the underlying technology that is the subject of these lawsuits.

The use of artificial intelligence (AI) to generate creative works and inventions raises interesting legal challenges to the protection of intellectual property. Courts have become the battleground for one individual in particular, Dr. Stephen Thaler, to test whether creative works and inventions generated exclusively by AI can be copyrighted and patented. In denying Dr. Thaler’s efforts to change the paradigm of intellectual property protection, courts have cited the “plain” language of the Copyright Act and the Patent Act. These recent decisions present obstacles to artists and inventors who may use generative AI tools without human involvement to create their works and innovate new technologies.

In addition, lawsuits have been filed against the providers of generative AI systems, including by celebrities such as Sarah Silverman. The plaintiffs’ primary claim in those cases has been that the defendants infringed their intellectual property rights by using their copyrighted materials from the internet to train the AI system. Also, the plaintiffs allege that the output created by the AI system infringes their intellectual property rights because it “add[s] nothing new” to their preexisting works.

This article provides an update on recent legal developments involving the implications for intellectual property rights arising from the use of generative AI.

Court Rulings That Creative Works and Inventions Created Exclusively by AI Cannot Be Copyrighted or Patented

In August 2023, the District of Columbia’s federal district court granted summary judgment in favor of the U.S. Copyright Office (USCO) and found that creative works generated exclusively by AI cannot be copyrighted. In Thaler v. Perlmutter, No. 1:22-cv-01564 (D.D.C. Aug. 18, 2023), the court upheld the USCO’s refusal to register a work titled A Recent Entrance to Paradise, holding that the work was not copyrightable because it “lack[ed] traditional human authorship.”

The Perlmutter decision comes on the heels of the USCO’s handling of the registration of a graphic novel, Zarya of the Dawn, that was generated with the assistance of the AI application Midjourney, and the guidance the USCO issued in March 2023, both of which provide helpful background for the result reached by the district court.

In September 2022, the USCO initially registered Zarya of the Dawn, but then about a month later, it notified the author, Kristina Kashtanova, that it might cancel the registration if substantial human involvement in the creation of the graphic novel could not be demonstrated. On February 21, 2023, the USCO clarified its position and reissued a registration certificate that excluded the graphic material created by Midjourney from copyright protection. In so doing, the USCO reiterated that “works of authorship” are limited to creations by individuals and not machines. Stated differently, the USCO’s position is that there must be “some element of human creativity” for a work to be copyrightable. See Urantia Found. v. Kristen Maaherra, 114 F.3d 955, 957–59 (9th Cir. 1997); Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 346 (1991) (“originality requires independent creation plus a modicum of creativity”). How much human involvement is enough, however, and how to document it sufficiently remain open questions for authors and artists seeking to protect their works.

Notably, the USCO found that Zarya of the Dawn’s text could be protected by copyright because Ms. Kashtanova advised the USCO that the graphic novel was written without the assistance of AI. Similarly, the USCO determined that the selection and arrangement of the images and text in the graphic novel were protectable as a compilation, based on Ms. Kashtanova’s representation that she alone was responsible for the selection and arrangement of the images.

With regard to the individual images, however, the USCO concluded that they are not protectable original works. Importantly, the USCO concluded that Midjourney—and not Ms. Kashtanova—was the author of the images. The USCO relied on the process by which Midjourney generates images to find that Ms. Kashtanova was not the originator of the content. Specifically, Midjourney created the final images found in Zarya of the Dawn in an unpredictable way, through a process of trial and error in which Ms. Kashtanova provided hundreds or thousands of prompts until the program generated an image with which she was satisfied. Put simply, the USCO decided that Ms. Kashtanova was not the author of the images because she was unable to control and guide Midjourney to reach the final images she desired.

Also, it is worth noting that the USCO relied on statements made on social media, attributed to Ms. Kashtanova, about her use of Midjourney to create Zarya of the Dawn. Ms. Kashtanova did not disclose in her registration application that she used AI to create any part of the graphic novel, nor did she disclaim any portion of the work. While the USCO ordinarily “does not conduct investigations or make findings of fact to confirm” statements made in an application, the USCO “may take administrative notice of facts or matters that are known by the Office or the general public” to evaluate an application for accuracy or completeness. See Compendium of U.S. Copyright Office Practices § 602.4(C) (3d ed.). Thus, the USCO effectively put authors and artists on notice that it will review information on the internet to assess the level of human involvement in a work’s creation.

Then, in March 2023, the USCO issued guidance in which it noted that the use of generative AI raises questions about (1) the copyrightability of works created by AI; (2) whether works involving human- and AI-generated content are protectable; and (3) what information applicants must disclose to the USCO about the use of such technologies in the creative process.

In addition to reaffirming its position that copyright protection applies only to expressive works created by humans, the USCO explained that it will make case-by-case determinations on the protectability of works involving generative AI contributions. Of particular importance is how an AI application operates and was used to create the final work. The USCO distinguished situations where AI was used to assist a human author from situations where the AI application would be considered the author—where an individual lacks ultimate creative control over how the AI application responds to prompts and generates content. The USCO explained that not all materials involving the use of AI would be excluded from protection. For example, copyright protection would still apply where AI was used to edit an image. In sum, the key inquiry is the extent to which an individual had creative control over a work’s expression.

Importantly, the USCO explained that applicants have a duty to disclose the use of AI when seeking copyright protection and to provide a brief explanation of the individual’s contributions. The USCO effectively put authors and artists on notice that AI-generated content that is more than de minimis should be explicitly excluded from an application and that previously filed applications should be reviewed to make sure that AI-generated material has been disclosed and the application corrected, if necessary.

The result in Perlmutter is not surprising given the USCO’s position on the copyrightability of Zarya of the Dawn and its March 2023 guidance. That is especially the case because Dr. Thaler acknowledged that A Recent Entrance to Paradise was wholly created by AI and disclaimed any human involvement. The Zarya of the Dawn saga and the Perlmutter decision raise the question of how much human involvement is enough for an author to receive a copyright registration for a work in which AI played some role.

Not surprisingly, the Copyright Act does not answer this question. While the Copyright Act provides that “original works of authorship” may be protected, the statute does not define the term “author.” Federal courts, however, have held that an author must be a human. See Naruto v. Slater, 888 F.3d 418, 420 (9th Cir. 2018) (“we conclude that this monkey—and all animals, since they are not human—lacks standing under the Copyright Act”). In keeping with the Naruto decision, in 2019, the USCO denied Dr. Thaler’s application to register A Recent Entrance to Paradise, which he acknowledged at the outset “was autonomously created by a computer algorithm running on a machine.”

Similarly, Dr. Thaler filed two patent applications that were rejected by the U.S. Patent and Trademark Office because they named an AI system called Device for the Autonomous Bootstrapping of Unified Sentience (DABUS) as the “inventor.” Specifically, the Patent Act defines the term “inventor” to mean “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” 35 U.S.C. § 100(f). The U.S. Court of Appeals for the Federal Circuit ruled against Dr. Thaler and likewise held that inventors must be human beings. The Federal Circuit based its decision on the language of the Patent Act and did not see the need to make “an abstract inquiry into the nature of the invention or the rights, if any, of AI systems.” Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022). See also Diamond v. Chakrabarty, 447 U.S. 303, 309 (1980) (“abstract ideas,” the laws of nature, and physical phenomena are not patentable). Dr. Thaler appealed, and the Supreme Court denied his petition for certiorari earlier this year.

Lawsuits Claiming That Generative AI Tools Infringe Intellectual Property and Other Rights

A number of lawsuits have been filed against the providers of generative AI technology for copyright infringement over their use of enormous amounts of data to train their systems, and recent decisions provide insight into the detailed facts a plaintiff will need to allege to state a claim or survive a fair use defense at trial.

In Thomson Reuters Enterprise Centre GMBH v. Ross Intelligence Inc., No. 1:20-cv-613-SB (D. Del. Sept. 25, 2023), Thomson Reuters alleges that the defendant violated its copyrights by using content from Thomson Reuters’s Westlaw legal research database to train Ross’s AI system. Delaware’s federal district court largely denied the parties’ summary judgment motions because it found that “many of the critical facts” were genuinely disputed.

One of the key parts of the opinion is how the court addressed Ross’s fair use defense. The court began that analysis by noting that whether a use is transformative and whether it is commercial (the purpose and character of the use) are among the most important fair use considerations. The court found that Ross’s uses “were undoubtedly commercial” and that one of its goals was to compete with Westlaw. On the character of the use, the court discussed the Supreme Court’s May 18, 2023, decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, 143 S. Ct. 1258 (2023). The district court refused to place too much weight on that opinion, given the Supreme Court’s recognition there that “a use’s transformativeness may outweigh its commercial character” and the finding in Warhol that “both elements point[ed] in the same direction.” Id. at 1280. The district court in Delaware determined that the case before it was more akin to the “technological context” present in Google LLC v. Oracle America, Inc., 141 S. Ct. 1183 (2021), in which the Supreme Court “placed much more weight on transformation than commercialism.” Thomson Reuters, No. 1:20-cv-613-SB, slip op. at 17 (citing Google LLC, 141 S. Ct. at 1204 (“[A] finding that copying was not commercial in nature tips the scales in favor of fair use. But the inverse is not necessarily true, as many common fair uses are indisputably commercial.”)). The parties, of course, provided competing evidence and arguments on whether Ross’s use was transformative, and the court left the first factor, along with a host of factual and legal questions, to the jury to decide.

Next is Andersen et al. v. Stability AI Ltd. et al., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023), a putative class action in which a federal district court in California dismissed all but one of the claims brought by artists against the providers of generative AI technology used to create images. In Andersen, the plaintiffs alleged that the image-generating AI tools of Stability AI, Midjourney, and DeviantArt infringed their copyrights. The generative AI product at issue was Stable Diffusion, which allegedly is maintained and sold by Stability AI. The crux of the plaintiffs’ claims was that Stability AI downloaded or “scraped” billions of copyrighted images from the internet without authorization to train Stable Diffusion and that the content generated by Stable Diffusion improperly competes with the training images. The plaintiffs asserted claims against Midjourney and DeviantArt as well, alleging that their products rely on Stable Diffusion to generate infringing images.

In largely granting the motions to dismiss, the court described the complaint as “defective in numerous respects.” For example, the court was critical of the plaintiffs’ group pleading and held that an amended pleading must contain allegations showing that the defendants “separately violated their copyrights, removed or altered their copyright management information, or violated their rights of publicity.” Specifically, the court explained that the plaintiffs “should identify each defendant by name with respect to conduct they allege each defendant engaged in.”

Only plaintiff Sarah Andersen’s direct copyright infringement claim against Stability AI survived. The direct infringement claims of the other two named plaintiffs, Kelly McKernan and Karla Ortiz, were dismissed with prejudice because they had not registered their images with the USCO.

In addition, the court dismissed the direct infringement claims against DeviantArt and Midjourney because the theories the plaintiffs advanced against them lacked the requisite factual allegations. As to DeviantArt, the court held that the plaintiffs failed to “allege specific plausible facts that DeviantArt played any affirmative role in the scraping and using of [the plaintiffs’] registered works to create the Training Images.” And with regard to Midjourney, the court found that the claims were even more lacking and noted that “Plaintiffs need to clarify their theory against Midjourney—is it based on Midjourney’s use of Stable Diffusion, on Midjourney’s own independent use of Training Images to train the Midjourney product, or both?” The court granted leave to amend, and it will be interesting to see what facts are alleged in a subsequent complaint, which was due to be filed in late November 2023.

Two other cases, Silverman et al. v. OpenAI Inc. et al., No. 3:23-cv-03416 (N.D. Cal. July 7, 2023), and Kadrey et al. v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal. Nov. 20, 2023), have recently been grabbing headlines due to their interesting facts, high-profile plaintiffs, and active dockets. In June and July of 2023, author and stand-up comedian Sarah Silverman, along with several other individuals, filed lawsuits against OpenAI, the developer of ChatGPT, and Meta Platforms Inc., alleging direct and vicarious copyright infringement, violations of the Digital Millennium Copyright Act, unfair competition, negligence, and unjust enrichment.

As in Thomson Reuters and Andersen, the lawsuits allege that the defendants violated the plaintiffs’ copyrights by using their literary works as training data. Also, the plaintiffs allege that they did not authorize the defendants to use their copyrighted works in AI-generated output, which the suits describe as infringing derivative works. The suit against OpenAI claims the platform used “shadow libraries” containing thousands of copyrighted book titles to train its large language model, GPT-3.

In July 2023, the district court determined that the authors’ cases against OpenAI, Inc., are related, and a month later, OpenAI filed a motion to dismiss all but the direct copyright infringement claim. In the motion, OpenAI argues that the authors’ claims are “unworkable” and “defective” because they incorrectly contend that every ChatGPT output is an infringing derivative work of their books. The authors filed their response brief on September 28, urging the district court to ignore “OpenAI’s meanderings,” as the pleadings provide sufficient facts supporting each element of the claims. The authors’ response brief relies heavily on the summary judgment opinion in Thomson Reuters, discussed above.

On November 20, 2023, District Judge Vince Chhabria granted Meta Platforms’ motion to dismiss and threw out all but one of the claims in the proposed class action brought by Silverman, Christopher Golden, and Richard Kadrey. The court found that “the remaining theories of liability, at least as articulated in the complaint, are not viable,” cited the Andersen case, and concluded that, to prevail on a theory that LLaMA’s outputs are infringing derivative works, the authors would need to allege and ultimately submit proof that the outputs incorporate a portion of the books. In other words, the authors must show the similarities between the books and the outputs.

At a December 7, 2023, hearing in the Silverman v. OpenAI case, District Judge Araceli Martínez-Olguín stated that she was “moved” by Judge Chhabria’s decision in Kadrey and noted that during oral argument, counsel for the authors described “things that might be viable causes of action but that just weren’t present in the complaint.”

As a result, on December 13, 2023, the authors filed their first consolidated amended complaint, which now includes specific allegations of direct copyright infringement, detailing how, in training its LLaMA models, Meta Platforms copied works such as Silverman’s book The Bedwetter.

The cases against OpenAI and Meta Platforms are quite active, and new developments continue to emerge.

In a twist that pairs claims of intellectual property violations involving AI with claims under the Racketeer Influenced and Corrupt Organizations (RICO) Act, the plaintiffs in Perry v. Shein Distribution Corp., No. 2:23-cv-5551 (C.D. Cal. July 11, 2023), are a group of independent artists who claim that Shein stole their designs “over and over again, as part of a long and continuous pattern of racketeering,” by using its AI algorithms to create “exact copies” of their works. This is in contrast to Andersen, in which the AI-generated images are not replicas of the images that were used as the AI system’s training data.

Without giving a lot of detail about how Shein’s AI technology is supposed to work, the plaintiffs nevertheless assert that the system is “smart enough to misappropriate the pieces with the greatest commercial potential” and “astonishingly determine[] nascent fashion trends.” The RICO hook is based on allegations that Shein and associated entities have engaged in a corrupt pattern of copyright and trademark infringement.

It is also worth noting that plaintiffs are filing class action lawsuits against providers of AI technology for alleged violations of federal and state privacy statutes and various tort claims. In P.M. et al. v. OpenAI LP et al., No. 3:23-cv-03199 (N.D. Cal. June 28, 2023), class action plaintiffs had claimed that OpenAI and related entities violated federal privacy laws and improperly used their personal information. The lengthy complaint asserted 15 claims, including violations of the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act—federal statutes intended to address privacy, cybercrime, and related issues.

On September 15, 2023, the plaintiffs, without explanation, voluntarily dismissed their complaint without prejudice. The allegations in the case were nevertheless remarkable in that the plaintiffs predicted a dark future for AI, arising in part out of the alleged business practices of OpenAI and others. Among the plaintiffs were individuals who used various social media and AI platforms and alleged that the defendants misappropriated the information submitted to these platforms for their own purposes in a way that went well beyond their reasonable expectations. They asserted not only that the material allegedly appropriated by the defendants may be used to create harmful or illegal content but that it may lead to the “collapse of civilization as we know it.” The users’ entire private lives were purportedly at the mercy of OpenAI and its products. After painting this bleak picture, the 157-page complaint identified 6 separate classes of plaintiffs and a slew of statewide subclasses, and it asserted 15 separate claims. Some were brought under federal and state privacy laws, while others were state law claims based on theft, negligence, unfair business practices, and the like. Interestingly, the complaint refers to the alleged “theft of . . . copyrighted information,” but it did not include a claim for copyright infringement.

Conclusion

There have been significant developments involving the intellectual property issues relating to the use of generative AI in the last year. It will be exciting to see how courts and juries will address the panoply of issues raised by the use of this technology. At the same time, the regulatory landscape is taking shape while these litigations are playing out. Even though the grim picture painted by science fiction stories and some plaintiffs may not come to fruition, one thing is certain—the landscape of legal claims relating to AI and intellectual property will continue to evolve, much like the underlying technology that is the subject of these lawsuits.
