Lawsuits Claiming That Generative AI Tools Infringe Intellectual Property and Other Rights
A number of lawsuits have been filed against providers of generative AI technology for copyright infringement over their use of enormous amounts of data to train their systems, and recent decisions provide insight into the detailed facts a plaintiff will need to allege to state a claim or to overcome a fair use defense at trial.
In Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-cv-613-SB (D. Del. Sept. 25, 2023), Thomson Reuters alleged that the defendant violated its copyrights by using content from its Westlaw legal research database to train Ross's AI system. Delaware's federal district court largely denied the parties' summary judgment motions because it found that "many of the critical facts" were genuinely disputed.
One of the key parts of the opinion is how the court addressed Ross's fair use defense. The court began that analysis by noting that the purpose and character of the use (i.e., whether the use is transformative as opposed to merely commercial) is one of the most important fair use factors. The court found that Ross's uses "were undoubtedly commercial" and that one of its goals was to compete with Westlaw. On the character of the use, the court discussed the Supreme Court's May 18, 2023, decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, 143 S. Ct. 1258 (2023). The district court refused to place too much weight on that opinion, given the Supreme Court's recognition there that "a use's transformativeness may outweigh its commercial character" and the finding in Warhol that "both elements point[ed] in the same direction." Id. at 1280. The district court in Delaware determined that the case before it was more akin to the "technological context" present in Google LLC v. Oracle America, Inc., 141 S. Ct. 1183 (2021), in which the Supreme Court "placed much more weight on transformation than commercialism." Thomson Reuters, No. 1:20-cv-613-SB, slip op. at 17 (citing Google LLC, 141 S. Ct. at 1204 ("[A] finding that copying was not commercial in nature tips the scales in favor of fair use. But the inverse is not necessarily true, as many common fair uses are indisputably commercial.")). The parties, of course, provided competing evidence and arguments on whether Ross's use was transformative, and the court left the first factor, along with a host of other factual and legal questions, for the jury to decide.
Next is Andersen et al. v. Stability AI Ltd. et al., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023), a putative class action in which a federal district court in California dismissed all but one of the claims brought by artists against the providers of generative AI technology used to create images. In Andersen, the plaintiffs alleged that the image-generating AI tools of Stability AI, Midjourney, and DeviantArt infringed their copyrights. The generative AI product at issue was Stable Diffusion, which allegedly is maintained and sold by Stability AI. The crux of the plaintiffs’ claims was that Stability AI downloaded or “scraped” billions of copyrighted images from the internet without authorization to train Stable Diffusion and that the content generated by Stable Diffusion improperly competes with the training images. The plaintiffs asserted claims against Midjourney and DeviantArt as well, alleging that their products rely on Stable Diffusion to generate infringing images.
In largely granting the motions to dismiss, the court described the complaint as “defective in numerous respects.” For example, the court was critical of the plaintiffs’ group pleading and held that an amended pleading must contain allegations showing that the defendants “separately violated their copyrights, removed or altered their copyright management information, or violated their rights of publicity.” Specifically, the court explained that the plaintiffs “should identify each defendant by name with respect to conduct they allege each defendant engaged in.”
Only plaintiff Sarah Andersen's direct copyright infringement claim against Stability AI survived. The direct infringement claims of the other two named plaintiffs, Kelly McKernan and Karla Ortiz, were dismissed with prejudice because they had not registered their images with the U.S. Copyright Office (USCO).
In addition, the court dismissed the direct infringement claims against DeviantArt and Midjourney because the theories the plaintiffs advanced against them lacked the requisite factual allegations. As to DeviantArt, the court held that the plaintiffs failed to “allege specific plausible facts that DeviantArt played any affirmative role in the scraping and using of [the plaintiffs’] registered works to create the Training Images.” And with regard to Midjourney, the court found that the claims were even more lacking and noted that “Plaintiffs need to clarify their theory against Midjourney—is it based on Midjourney’s use of Stable Diffusion, on Midjourney’s own independent use of Training Images to train the Midjourney product, or both?” The court granted leave to amend, and it will be interesting to see what facts are alleged in a subsequent complaint, which was due to be filed in late November 2023.
Two other cases, Silverman et al. v. OpenAI Inc. et al., No. 3:23-cv-03416 (N.D. Cal. July 7, 2023), and Kadrey et al. v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal. Nov. 20, 2023), have been grabbing headlines recently due to their interesting facts, high-profile plaintiffs, and active dockets. In June and July of 2023, author and stand-up comedian Sarah Silverman, along with several other individuals, filed lawsuits against OpenAI, the developer of ChatGPT, and Meta Platforms Inc., alleging direct and vicarious copyright infringement, violations of the Digital Millennium Copyright Act, unfair competition, negligence, and unjust enrichment.
As in Thomson Reuters and Andersen, the lawsuits allege that the defendants violated the plaintiffs' copyrights by using their literary works as training data. The plaintiffs also allege that they did not authorize the defendants to use their copyrighted works in AI-generated output, which the suits describe as infringing derivative works. The suit against OpenAI claims that the company used "shadow libraries" containing thousands of copyrighted books to train its large language model, GPT-3.
In July 2023, the district court held that the authors' cases against OpenAI, Inc., are related, and a month later, OpenAI filed a motion to dismiss all but the direct copyright infringement claim. In the motion, OpenAI argues that the authors' claims are "unworkable" and "defective" because they incorrectly contend that every ChatGPT output is an infringing derivative work of their books. The authors filed their response brief on September 28, 2023, urging the district court to ignore "OpenAI's meanderings," as the pleadings provide sufficient facts supporting each element of the claims. The authors' response brief relies heavily on the summary judgment opinion in Thomson Reuters, discussed above.
On November 20, 2023, District Judge Vince Chhabria granted Meta Platforms' motion to dismiss and threw out all but one of the claims in the proposed class action brought by Silverman, Christopher Golden, and Richard Kadrey. The court found that "the remaining theories of liability, at least as articulated in the complaint, are not viable," cited the Andersen case, and concluded that, to prevail on a theory that LLaMA's outputs are infringing derivative works, the authors would need to allege and ultimately submit proof that the outputs incorporate a portion of the books. In other words, the authors must show similarities between their books and the outputs.
At a December 7, 2023, hearing in the Silverman v. OpenAI case, District Judge Araceli Martínez-Olguín stated that she was “moved” by Judge Chhabria’s decision in Kadrey and noted that during oral argument, counsel for the authors described “things that might be viable causes of action but that just weren’t present in the complaint.”
As a result, on December 13, 2023, the authors filed their first consolidated amended complaint, which now includes specific allegations of direct copyright infringement, detailing how Meta Platforms copied works such as Silverman's book The Bedwetter to train its LLaMA models.
The cases against OpenAI and Meta Platforms are quite active, and new developments continue to emerge.
In a twist that pairs claims of AI-enabled intellectual property theft with alleged violations of the Racketeer Influenced and Corrupt Organizations (RICO) Act, the plaintiffs in Perry v. Shein Distribution Corp., No. 2:23-cv-5551 (C.D. Cal. July 11, 2023), are a group of independent artists who claim that Shein stole their designs "over and over again, as part of a long and continuous pattern of racketeering," by using its AI algorithms to create "exact copies" of their works. This is in contrast to Andersen, in which the AI-generated images are not alleged to be replicas of the images that were used as the AI system's training data.
Without providing much detail about how Shein's AI technology supposedly works, the plaintiffs nevertheless assert that the system is "smart enough to misappropriate the pieces with the greatest commercial potential" and "astonishingly determine[] nascent fashion trends." The RICO hook is based on allegations that Shein and associated entities have engaged in a corrupt pattern of copyright and trademark infringement.
It is also worth noting that plaintiffs are filing class action lawsuits against providers of AI technology alleging violations of federal and state privacy statutes, along with various tort claims. In P.M. et al. v. OpenAI LP et al., No. 3:23-cv-03199 (N.D. Cal. June 28, 2023), the putative class action plaintiffs claimed that OpenAI and related entities violated federal privacy laws and improperly used their personal information. The lengthy complaint asserted 15 claims, including violations of the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act, federal statutes intended to address privacy, cybercrime, and related issues.
On September 15, 2023, the plaintiffs, without explanation, voluntarily dismissed their complaint without prejudice. The allegations in the case were nevertheless remarkable in that the plaintiffs predicted a dark future for AI, arising in part out of the alleged business practices of OpenAI and others. Among the plaintiffs were individuals who used various social media and AI platforms and alleged that the defendants misappropriated the information submitted to those platforms for their own purposes in a way that went well beyond the users' reasonable expectations. They asserted not only that the material allegedly appropriated by the defendants may be used to create harmful or illegal content but also that it may lead to the "collapse of civilization as we know it." The users' entire private lives were purportedly at the mercy of OpenAI and its products. After painting this bleak picture, the 157-page complaint identified six separate classes of plaintiffs and a slew of statewide subclasses, and it asserted 15 separate claims. Some were brought under federal and state privacy laws, while others were state law claims based on theft, negligence, unfair business practices, and the like. Interestingly, the complaint referred to the alleged "theft of . . . copyrighted information" but did not include a claim for copyright infringement.
Conclusion
There have been significant developments over the last year involving the intellectual property issues raised by the use of generative AI. It will be exciting to see how courts and juries address the panoply of issues this technology presents. At the same time, the regulatory landscape is taking shape as these litigations play out. Even if the grim picture painted by science fiction stories and some plaintiffs does not come to fruition, one thing is certain: the landscape of legal claims relating to AI and intellectual property will continue to evolve, much like the underlying technology that is the subject of these lawsuits.