In the blockchain space, the government continues to press its view that all crypto other than Bitcoin is a security, and to pursue enforcement actions against retail exchanges and issuers. The spectacular collapse of Sam Bankman-Fried and FTX, together with the recent guilty plea to a federal charge by Changpeng Zhao (“CZ”), the founder of Binance, make clear that the decentralized promise of blockchain has, for the last several years, been waylaid by highly centralized projects susceptible to the familiar economic shenanigans long present on Wall Street and in traditional finance. We will have to wait a bit longer, likely into 2025 and beyond, for the United States Supreme Court to decide whether and to what extent securities laws apply to this technology, and whether various actions by agencies like the Securities and Exchange Commission are proper or have exceeded lawful bounds. In the meantime, when it comes to crypto, government agencies and the plaintiffs’ bar will continue to set their sights on those viewed as defrauding retail investors.
I am often asked to present CLEs teaching the law (and technology) of AI and blockchain to judges, practitioners, clients, and law students across the country. In October 2023, I was invited to testify as an AI expert before a United States Senate AI subcommittee hearing focused on responsibly legislating AI in the employment context, and I continue to represent clients, both domestically and internationally, in all aspects of AI and blockchain matters (from governance, oversight, and compliance to litigation). Many people ask me, “When will Congress pass comprehensive regulation in these areas?” My answer is that while I believe Congress will eventually act, in the meantime the de facto initial legal rules will be developed by the decisions rendered in the nationwide trial and appellate courts confronted almost daily with these cases and the complicated issues they raise. For those of us who love and are intrigued by these technologies, it is our responsibility to understand their legal evolution and, where possible, to help shape the law. Hopefully, this Chapter does its small part each year in educating and empowering business law practitioners—whether new to the field or experienced veterans—to participate in this fascinating and quickly developing ecosystem.
As in prior years, we made certain judgments as to what should be included. We omitted cases decided prior to 2023 that were reported in previous iterations of the Chapter after evaluating whether there were any significant AI-related updates to those cases; in most instances there were none. And because AI is the subject of a rapidly multiplying number of legislative proposals, we omitted the 2020–2022 legislative updates included in prior years and focus on legislative trends from 2023.
Finally, I want to thank my colleagues Adam Aft, Bryce Bailey, Cynthia Cole, Loic Coutelier, Alex Crowley, Lothar Determann, Rachel Ehlers, Jacqueline Gerson, Sinead Kelly, Mackenzie Martin, Avi Toltzis, and Jennifer Trock for their assistance in preparing this chapter.
We look forward to continuing to track the trends in AI and blockchain for the next several years.
§ 8.2 Artificial Intelligence Cases of Note
§ 8.2.1 United States Supreme Court
There were no qualifying decisions by the United States Supreme Court in 2023.
Chief Justice Roberts, however, highlighted AI as the latest technological frontier in the 2023 Year-End Report on the Federal Judiciary. Surveying the potential uses of AI in the federal courts, the Chief Justice observed that “studies show a persistent public perception of a ‘human-AI fairness gap,’ reflecting the view that human adjudications, for all of their flaws, are fairer than whatever the machine spits out.”
§ 8.2.2 First Circuit
There were no qualifying decisions within the First Circuit in 2023.
Pending Cases of Note
Baker v. CVS Health Corporation, No. 1:23-cv-11483 (D. Mass. Filed June 30, 2023). The plaintiff job applicant alleges that CVS contracts with HireVue, an AI-based job candidate screening tool that includes video interview sessions and that, according to HireVue, uses an applicant’s facial expressions to identify lies and embellishments. Plaintiff filed a putative class action on behalf of similarly situated job applicants, claiming CVS failed to provide candidates with written notice of its use of a lie detector test, as required by Massachusetts law. In February 2024, the court denied CVS’s partial motion to dismiss.
§ 8.2.3 Second Circuit
Doe v. EviCore Healthcare MSI, LLC, No. 22-530-cv, 2023 U.S. App. LEXIS 4794 (2d Cir. Feb. 28, 2023). The Second Circuit Court of Appeals affirmed district court dismissal of False Claims Act charges based on Rule 9(b), because plaintiffs failed to plead fraud with sufficient particularity. Plaintiffs asserted that service provider eviCore deployed artificial intelligence systems to approve health insurance requests based on flawed criteria and without manual review and that, as a result, eviCore provided worthless services to insurance companies and caused those insurance companies to bill the government for unnecessary and fraudulently approved medical services. The court held that “the services eviCore provided were not so worthless that they were the equivalent of no performance at all.”
In re Celsius Network LLC, 655 B.R. 301 (Bankr. S.D.N.Y. 2023). A creditor in a bankruptcy dispute submitted a report from its valuation expert, Hussein Faraj, that was written by generative AI at Mr. Faraj’s direction. The court found that while the written report was inadmissible because it lacked reliability, the expert could nonetheless give live testimony in a bench trial.
Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023). The United States District Court for the Southern District of New York sanctioned attorneys for misconduct because they included citations to non-existent cases in motions and made misleading statements to the court. Attorneys representing an individual plaintiff against an airline had used ChatGPT to research case law. ChatGPT delivered inaccurate output, including citations to cases that did not exist. When opposing counsel called the existence of the cases into question, plaintiff’s counsel went back to ChatGPT and asked for full copies of the cases. ChatGPT delivered excerpts of cases that did not exist, citing still other cases that did not exist.
Pending Cases of Note
Authors Guild, et al. v. OpenAI Inc. et al., No. 1:23-cv-8292 (S.D.N.Y. Filed Sept. 19, 2023). Plaintiff writers filed a putative class action against defendant AI developers, who created AI that can make derivative works based on, mimicking, summarizing, or paraphrasing plaintiffs’ works, without seeking permission or a license. The plaintiffs allege this conduct amounts to (1) direct infringement, (2) vicarious infringement, and (3) contributory infringement of their copyrights.
Basbanes et al. v. Microsoft Corp. et al., No. 1:24-cv-00084 (S.D.N.Y. Filed Jan. 5, 2024). Plaintiff journalists allege that the defendant AI developers use their written works to train generative AI models, constituting (1) direct infringement, (2) vicarious infringement, and (3) contributory infringement of the journalists’ copyrights.
Huckabee et al. v. Meta Platforms, Inc. et al., No. 1:23-cv-09152 (S.D.N.Y. Filed Oct. 17, 2023). Former Arkansas governor Mike Huckabee filed this copyright infringement suit on behalf of a proposed class of authors against Meta, Microsoft, Bloomberg, and artificial intelligence research institute EleutherAI, claiming that the defendants trained their AI tools on data sets that included the “Books3” dataset of approximately 183,000 e-books, without the plaintiffs’ permission. The complaint alleges these actions constitute (1) direct copyright infringement, (2) vicarious copyright infringement, (3) removal of copyright management information in violation of the DMCA, (4) conversion, (5) negligence, and (6) unjust enrichment. In December 2023, the claims against the Meta and Microsoft defendants were transferred to the Northern District of California to be consolidated with the Kadrey lawsuit (see below), with the claims against the Bloomberg defendants remaining in the Southern District of New York. Bloomberg filed a motion to dismiss, in response to which the plaintiffs filed an amended complaint withdrawing the indirect copyright infringement, DMCA, and state-law claims.
Sancton v. OpenAI Inc. et al., No. 1:23-cv-10211 (S.D.N.Y. Filed Nov. 21, 2023). Plaintiff authors filed a putative class action against OpenAI challenging ChatGPT and its underlying “large language models,” which use the copyrighted works of thousands of authors as a training dataset. Plaintiffs allege the training amounts to direct and contributory copyright infringement.
The New York Times Co. v. Microsoft Corp. et al., No. 1:23-cv-11195 (S.D.N.Y. Filed Dec. 27, 2023). The New York Times sued Microsoft and OpenAI, alleging that OpenAI created unauthorized reproductions of Times works while training the large language models underlying ChatGPT, reproduced verbatim excerpts of Times content in response to user prompts, misappropriated referrals, and generated hallucinations that falsely attributed statements to the Times. The complaint alleges these actions constitute: (1) copyright infringement, (2) vicarious copyright infringement, (3) contributory copyright infringement, (4) removal of copyright management information in violation of the DMCA, (5) common law unfair competition by misappropriation, and (6) trademark dilution. In February 2024, the defendants moved to dismiss parts of the direct infringement claims and sought full dismissal of the contributory infringement, DMCA, and unfair competition claims.
§ 8.2.4 Third Circuit
Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., No. 1:20-cv-613-SB, 2023 U.S. Dist. LEXIS 170155 (D. Del. Sep. 25, 2023). Plaintiff alleged that defendant, an artificial intelligence startup, infringed plaintiff’s copyrighted content by using that content to train its machine learning search tool. The court largely denied the parties’ five summary judgment motions, including (1) plaintiff’s copyright-infringement claim (granting summary judgment only on the element of “actual copying”), (2) the cross-motions based on fair use (but granting plaintiff’s motion for summary judgment on defendant’s miscellaneous defenses), (3) plaintiff’s claim of tortious interference with contract (granting partial summary judgment on two elements of tortious interference—existence of a contract and harm—with respect to plaintiff’s bot and password-sharing restrictions), and (4) defendant’s claim of tortious interference (granting defendant’s preemption defense with respect to plaintiff’s anti-competition tortious-interference claim).
Recentive Analytics, Inc. v. Fox Corp., Civil Action No. 22-1545-GBW, 2023 U.S. Dist. LEXIS 166196 (D. Del. Sep. 19, 2023). The court granted defendant’s motion to dismiss claims that defendant infringed plaintiff’s patents on machine-learning systems for optimizing the scheduling and broadcasting of local programming. The court found that the claims were directed to patent-ineligible subject matter, as they recited abstract ideas and the machine learning involved no inventive concept.
§ 8.2.5 Fourth Circuit
There were no qualifying decisions within the Fourth Circuit in 2023.
§ 8.2.6 Fifth Circuit
Chamber of Commerce of the United States v. Consumer Fin. Prot. Bureau, No. 6:22-cv-00381 (E.D. Tex. Sept. 8, 2023). The court ruled that the CFPB acted outside the authority granted to it by Congress when it updated its examination manual for financial institutions to broaden its oversight of unfair, deceptive, or abusive acts to include discriminatory acts. For our purposes, the decision bears on the CFPB’s authority to regulate new technologies, like AI, through a discrimination lens. Had the CFPB been permitted to alter the manual to include discrimination, financial institutions might be more limited in how they can use new technologies, including those with algorithmic decision-making, as they would be required to provide explanations under the Equal Credit Opportunity Act.
§ 8.2.7 Sixth Circuit
Pending Cases of Note
McComb v. Best Buy Inc., No. 3:23-cv-28, 2024 U.S. Dist. LEXIS 8492, at *3 (S.D. Ohio Jan. 16, 2024). As part of its order granting leave to a pro se plaintiff to file a second amended complaint, the court required the plaintiff to file an affidavit “verifying that he has not used Artificial Intelligence (‘AI’) to prepare case filings” and prohibited all parties from using AI for the case. Penalties for use of AI in the case included sanctions, contempt, and dismissal of the case.
Bond v. Clover Health Invs., Corp., No. 3:21-cv-00096, 2023 U.S. Dist. LEXIS 24749, at *9–10 (M.D. Tenn. Feb. 14, 2023). The court granted a motion for class certification in relation to a claim that Clover Health Investments Corp. defrauded investors, in part based on false statements regarding use of Clover Health’s AI-powered software called Clover Assistant. The original case, Bond v. Clover Health Invs., Corp., 587 F. Supp. 3d 641 (M.D. Tenn. 2022), is discussed further in the 2023 version of this chapter at “Recent Developments in Artificial Intelligence 2023.”
Ruggierlo, Velardo, Burke, Reizen & Fox, P.C. v. Lancaster, No. 22-12010, 2023 U.S. Dist. LEXIS 160755, at *5 n.5 (E.D. Mich. Sep. 11, 2023). Pro se defendant cited non-existent cases in his objection to plaintiff law firm’s claims that defendant failed to pay his legal bills. The court avoided speculating whether the non-existent cases were from defendant’s “imagination, a generative artificial intelligence tool’s hallucination, both, or something else entirely.” In any event, the non-existent cases wasted time and resources, and destroyed defendant’s opportunity to state legitimate objections. The court warned that citing non-existent cases could lead to sanctions on the citing party.
In re Upstart Holdings, Inc. Sec. Litig., No. 2:22-cv-02935, 2023 U.S. Dist. LEXIS 175451, at *6, *36–*45, *73–*74 (S.D. Ohio Sep. 29, 2023). The court denied a motion to dismiss a securities fraud case relating to statements made about Upstart’s artificial intelligence-based lending platform. Some of Upstart’s statements went beyond puffery and were found to be material misstatements actionable under SEC Rule 10b-5 (prohibiting manipulative and deceptive practices), including statements having specific but inaccurate descriptions of how the AI model underlying its platform supposedly performed better than traditional FICO-based lending models.
Concord Music Grp., Inc. et al. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. Filed Oct. 18, 2023). Several music publishing companies, led by Universal Music Publishing Group, sued Anthropic PBC, alleging that the artificial intelligence company infringes the plaintiffs’ copyrighted song lyrics through its Claude series of large language models without paying the licensing fees that other lyrics aggregators pay. The plaintiffs allege that Anthropic’s activities constitute (1) direct copyright infringement, (2) contributory infringement, (3) vicarious infringement, and (4) removal or alteration of copyright management information. In November 2023, the defendants filed Rule 12(b)(2) and 12(b)(3) motions to dismiss, which were pending at the time of publication.
Barrows et al. v. Humana, Inc., No. 3:23-cv-00654 (W.D. Ky. Filed December 12, 2023). Class action plaintiffs allege that Humana has been using an AI system called nH Predict to wrongfully deny elderly patients care owed to them under Medicare Advantage plans, and that Humana intentionally limits its employees’ discretion to deviate from the nH Predict model’s predictions by setting targets to keep stays at post-acute care facilities within 1% of those predicted by the model. According to the complaint, these actions amount to breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, violations of North Carolina’s unfair claims settlement practices law, and insurance bad faith.
§ 8.2.8 Seventh Circuit
Dinerstein v. Google, LLC, 73 F.4th 502 (7th Cir. July 11, 2023). The University of Chicago and its medical center provided several years of anonymized patient medical records to Google for the purpose of training algorithms that could anticipate future health needs and thereby improve patients’ healthcare outcomes. The plaintiff brought a number of claims, including breach of contract with respect to a privacy notice, unjust enrichment, tortious interference with contract, and intrusion upon seclusion. The Seventh Circuit affirmed dismissal of the plaintiff’s claims on the basis that the plaintiff lacked standing, essentially because the plaintiff failed to allege any plausible, concrete, or imminent injury (i.e., merely being included in an anonymized data set was itself insufficient to establish standing).
Frier v. Hingiss, No. 23-cv-0290-bhl, 2023 U.S. Dist. LEXIS 164077 (E.D. Wisc. Sept. 15, 2023). The court identified briefing rife with errors, including hallucinated case citations, that it suspected may have been the product of AI, and admonished counsel: “To the extent the briefing was prepared using ‘artificial intelligence,’ counsel is reminded that he remains responsible for any briefing he files, regardless of the tools employed.”
Huskey v. State Farm Fire & Cas. Co., No. 22 C 7014, 2023 U.S. Dist. LEXIS 160629 (N.D. Ill. Sep. 11, 2023). Plaintiffs filed a class action against defendant, alleging that defendant’s use of machine learning to help detect fraud was biased against Black homeowners because it scrutinized certain claims more closely based on race, with the result that Black homeowners faced more hurdles when they submitted claims. One of plaintiffs’ claims, brought under § 3604(b) of the Fair Housing Act, survived the motion to dismiss. The court also held that plaintiffs had sufficiently alleged a disparate impact claim because they cited statistical evidence and connected that evidence to the algorithms being used.
§ 8.2.9 Eighth Circuit
Pending Cases of Note
Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. (D. Minn. Filed November 11, 2023). Class plaintiffs accuse UnitedHealth of deploying the AI model nH Predict to override physicians’ judgment as to medical necessity and unlawfully deny patients care owed to them under their Medicare Advantage plans. The complaint recites claims for breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, violations of Wisconsin’s unfair claims settlement practices law, and insurance bad faith. In February 2024, the defendants moved to dismiss for lack of jurisdiction. That motion was pending at the time of publication.
§ 8.2.10 Ninth Circuit
Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO, 2023 U.S. Dist. LEXIS 194324 (N.D. Cal. Oct. 30, 2023). Putative class action on behalf of artists against Stability AI—the developer of Stable Diffusion, an image-generation AI tool—as well as against DeviantArt and Midjourney, both of which developed AI products incorporating or using Stable Diffusion. The complaint alleged that because Stable Diffusion was trained on plaintiffs’ works of art so that it could produce images in the style of particular artists, it constitutes direct and indirect infringement of the plaintiffs’ copyrights. The court granted defendants’ motions to dismiss as to the direct infringement claims against DeviantArt and Midjourney and as to all indirect infringement claims, with leave to amend.
Doe v. Github, Inc., No. 22-cv-06823-JST, 2023 U.S. Dist. LEXIS 86983 (N.D. Cal. May 11, 2023). Software developers alleged that Github, an online hosting service for open source software projects, infringed their privacy and property interests, in addition to myriad other alleged violations under the Digital Millennium Copyright Act (DMCA), the Lanham Act and other laws, through its development and operation of Copilot and Codex. Copilot and Codex are artificial intelligence-based coding tools that employ machine learning algorithms trained on billions of lines of publicly available code, including plaintiffs’ code on Github repositories. The court dismissed the privacy and property rights claims on the basis that the plaintiffs lacked Article III standing because the allegations failed to establish that the plaintiffs had suffered injury, but allowed a claim seeking injunctive relief in respect of potential future harms. A claim that Github had unlawfully removed copyright management information in violation of DMCA also survived dismissal.
Newman v. Google LLC, No. CV 20-cv-04011-VC, 2022 U.S. Dist. LEXIS 238876 (N.D. Cal. Nov. 28, 2022). The court granted the defendant’s motion to dismiss the plaintiffs’ claim that YouTube’s algorithm violates the promise in its Community Guidelines because it considers the plaintiffs’ individual characteristics when deciding whether to remove, restrict, or monetize content, in part because the complaint does not adequately allege that the plaintiffs have been treated differently based on those characteristics.
Newman v. Google LLC, No. CV 20-cv-04011-VC, 2023 U.S. Dist. LEXIS 144686 (N.D. Cal. Aug. 17, 2023). The court granted the defendant’s motion to dismiss the plaintiff’s claim that YouTube’s content-moderating algorithm discriminates against them based on their race (the plaintiffs are African American and Hispanic content creators) in violation of YouTube’s promise to apply its Community Guidelines (which govern what type of content is allowed on YouTube) to everyone equally—regardless of the subject or the creator’s background, political viewpoint, position, or affiliation. The court found that plaintiffs had not adequately alleged the existence of a contractual promise.
Rivera v. Amazon Web Servs., No. 2:22-cv-00269, 2023 U.S. Dist. LEXIS 129517 (W.D. Wash. Jul. 26, 2023). The District Court for the Western District of Washington denied defendant’s motion to dismiss plaintiff’s claim that defendant’s facial recognition software (which uses biometric data) was used by defendant’s clients without those clients properly notifying members of the public of such use.
Mobley v. Workday, Inc., No. 23-cv-00770-RFL, 2024 U.S. Dist. LEXIS 11573 (N.D. Cal. Jan. 19, 2024). Plaintiff, an African-American man over the age of 40 with anxiety and depression, applied for 80 to 100 jobs with companies that use the defendant’s applicant screening tools. Mobley alleged that the screening tools that Workday offers employers discriminate on the basis of age, race, and disability. The court granted Workday’s motion to dismiss on the basis that the plaintiff had failed to exhaust his remedies with the Equal Employment Opportunity Commission as to his intentional discrimination claims and because the factual allegations of the complaint were insufficient to demonstrate that Workday is an “employment agency” under the anti-discrimination statutes at issue.
Pending Cases of Note
Jobiak, LLC v. Botmakers LLC, No. 2:23-cv-08604-DDP-MRW (C.D. Cal. Filed Oct. 12, 2023). Plaintiff, an AI-based recruitment platform, alleges the defendant has been “scraping” job posting data from plaintiff’s proprietary database and incorporating its contents directly into its own job listings. The complaint alleges these actions amount to: (1) copyright infringement, (2) violations of the Computer Fraud and Abuse Act, (3) violations of the California Comprehensive Computer Data Access and Fraud Act, (4) violations of the California Unfair Competition Law, and (5) removal of copyright management information under DMCA § 1202.
Kadrey et al. v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal. Filed July 7, 2023). Three authors filed a putative class action against Meta challenging LLaMA, a set of large language models trained in part on copyrighted books, including the plaintiffs’. Unlike the GPT models, LLaMA is “open source” and allows developers to create variations for free. Plaintiffs alleged, on behalf of all those similarly situated, the following causes of action: (1) direct copyright infringement, (2) vicarious copyright infringement, (3) removal of copyright management information under DMCA § 1202(b), (4) unfair competition under California law, (5) negligence, and (6) unjust enrichment. In November 2023, the court dismissed all claims with leave to amend except the negligence claim, which was dismissed with prejudice.
T. et al. v. OpenAI LP et al., No. 3:23-cv-04557-VC (N.D. Cal. Filed Sept. 5, 2023). This class action arises from allegations that the defendants engaged in unlawful and harmful conduct in developing, marketing, and operating their AI products, including ChatGPT-3.5, ChatGPT-4.0, Dall-E, and Vall-E (the “Products”), which allegedly use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge. Plaintiffs seek relief under (1) the Electronic Communications Privacy Act, (2) the Computer Fraud and Abuse Act, (3) the California Invasion of Privacy Act, (4) the California Unfair Competition Law, (5) negligence, (6) invasion of privacy, (7) intrusion upon seclusion, (8) larceny/receipt of stolen property, (9) conversion, (10) unjust enrichment, and (11) New York’s General Business Law. The defendants filed motions to dismiss, which were pending at the time of publication.
Tremblay, et al. v. OpenAI, Inc., et al., No. 3:23-cv-03223-AMO (N.D. Cal. Filed June 28, 2023). Two authors filed a class action against OpenAI challenging ChatGPT and its underlying large language models, GPT-3.5 and GPT-4, which use the copyrighted works of thousands of authors as a training dataset. The training dataset allows the GPT programs to produce written text. The plaintiffs allege, on behalf of similarly situated authors of copyrighted works used to train the GPT models, that such conduct constitutes: (1) direct copyright infringement, (2) vicarious copyright infringement, (3) removal of copyright management information under DMCA § 1202(b), (4) unfair competition under California law, (5) negligence, and (6) unjust enrichment.
Main Sequence, Ltd. v. Dudesy, LLC, No. 2:24-cv-00711 (C.D. Cal. Filed January 25, 2024). The estate of the late comedian George Carlin filed suit against the defendant media company, alleging that the defendant employed generative AI to create a script for a fake comedy special featuring Carlin and used voice generation tools to create a “sound-alike” to perform the generated script. The complaint alleges these acts violated copyright and Carlin’s posthumous right of publicity.
Matsko v. Tesla, No. 4:22-cv-05240 (N.D. Cal. Filed September 14, 2022). Putative class plaintiffs allege that defendant Tesla’s representations concerning its automobiles’ “Autopilot,” “Enhanced Autopilot,” and “Full Self-Driving Capability” features mislead consumers about the company’s autonomous driving capabilities and therefore violate the Magnuson-Moss Warranty Act, the California Unfair Competition Law, the California Consumer Legal Remedies Act, and false advertising standards, in addition to constituting breaches of express and implied warranties. In September 2023, the court granted the defendant’s motion to dismiss with leave for the plaintiffs to amend the complaint.
Faridian v. DoNotPay, Inc., No. 3:23-cv-01692 (N.D. Cal. Filed March 3, 2023). Class action plaintiff brought this action against DoNotPay, which bills itself as the “world’s first robot lawyer,” alleging that the defendant violated California’s Unfair Competition Law by engaging in the unauthorized practice of law.
§ 8.2.11 Tenth Circuit
There were no qualifying decisions within the Tenth Circuit in 2023.
§ 8.2.12 Eleventh Circuit
Athos Overseas Ltd. Corp. v. Youtube, Inc., No. 21-21698-Civ, 2023 U.S. Dist. LEXIS 85462 (S.D. Fla. May 16, 2023). Plaintiff, a video producer holding title to Mexican films allegedly uploaded to defendant YouTube without authorization, contended that YouTube violated the Digital Millennium Copyright Act (DMCA), and therefore forfeited its safe harbor protections, by failing to employ its advanced video detection software (Content ID) to identify infringing videos uploaded to the platform. The plaintiff argued that because YouTube has access to automated software that scans uploaded videos to identify infringing content, it has knowledge of every infringing video on its platform. The court granted YouTube’s summary judgment motion, citing Second and Ninth Circuit decisions rejecting the suggestion that an ISP can have specific knowledge of non-noticed infringements simply because it has access to video surveillance capabilities.
United States v. Grimes, No. 1:20-CR-00427-SCJ, 2023 U.S. Dist. LEXIS 40282, at *1 (N.D. Ga. Mar. 10, 2023). During his entry into the US at the Atlanta airport, defendant was flagged by facial recognition software used during preliminary entry processing, which identified him as being potentially linked to child sexual exploitation material. Based on this, officers undertook a search of the defendant and found recently deleted photos of child pornography on the defendant’s electronic devices. Defendant moved to suppress evidence obtained from this search on the basis that the officers lacked a reasonable suspicion to conduct the searches, because the Government had provided no evidence on the reliability of the facial recognition software. The court rejected the motion (in part) because, absent some suggestion that the facial recognition system is unreliable, it is a sufficient basis for reasonable suspicion.
§ 8.2.13 D.C. Circuit
Thaler v. Perlmutter, Civil Action No. 22-1564 (BAH), 2023 U.S. Dist. LEXIS 145823 (D.D.C. Aug. 18, 2023). Court granted a motion for summary judgment filed by Perlmutter, Register of Copyrights and Director of the United States Copyright Office, finding that the Copyright Office properly denied copyright registration for a piece of visual art autonomously created by a computer algorithm running on a machine (i.e., wholly created by AI) because U.S. copyright law protects only works of human creation. The court noted that copyright has never stretched so far as to protect works generated by new forms of technology operating absent any guiding human hand. Human authorship is a bedrock requirement of copyright.
United States v. Michel, No. 1:19-cr-00148 (D.D.C. Jan. 11, 2024). Michel, a former member of the Fugees, was convicted in April 2023 on ten criminal counts, including waging a back-channel lobbying campaign to end an investigation of Malaysian tycoon Jho Low. Michel’s new legal team is seeking a new trial based on assertions that his prior lawyer used an experimental generative AI program to draft his closing argument and failed to disclose that he had a financial stake in the company that developed it.
§ 8.2.14 Court of Appeals for the Federal Circuit
There were no qualifying decisions within the Court of Appeals for the Federal Circuit in 2023.
§ 8.3 Administrative
§ 8.3.1 Patent Trial and Appeal Board
Ex parte Iaremenko et al., 2022 Pat. App. LEXIS 5639 (PTAB Nov. 22, 2022). The USPTO Patent Trial and Appeal Board sustained an Examiner’s decision to reject claim 1 under 35 U.S.C. § 112 for lacking adequate written description support, where the relevant claim language recited a “PLD machine learning module . . . configured to detect an anomaly in at least one . . . of the ingress traffic and the egress traffic, and to send an anomaly indication to the PLD firewall.” (Additional rejections were sustained as well, but this summary focuses on the Section 112 issue pertaining to machine learning.) The applicant argued that the specification described “how the PLD machine learning process is trained and how anomalies are defined.” But the Board nonetheless determined that the applicant failed to address the Examiner’s finding that the written description did not include the specific technique (e.g., classification, regression, dimensionality reduction, clustering) used to identify deviations in patterns relating to the ranges of various ingress and/or egress parameters. “In other words, Appellant’s argument that the machine learning module is defined by its training process does not apprise us of error in the Examiner’s finding that the training process disclosed in the Specification defines the patterns against which anomalies may be identified but fails to describe how (or by what technique) the PLD machine learning module learns such patterns in the parameters during the simulated transactions learning process.”
§ 8.4 Legislation
With the wider adoption of AI tools by both consumers and businesses, as well as a growing awareness of the risks and downsides of these tools, 2023 witnessed a profusion of legislative proposals at both the federal and state levels. As the EU moved closer toward the enactment of its AI Act, a comprehensive legislative package that will regulate AI across a wide range of industries and use cases, the approach in the US has been comparatively piecemeal and tentative. However, in the absence of a monolith like the AI Act, proposed legislation addressing specific aspects or risks associated with AI has proliferated in US legislatures.
Although an exhaustive list of the scores of proposed statutes and regulations would be beyond the scope of this chapter, several broad trends are noteworthy.
§ 8.4.1 Policy and Governance
Some of the most prominent legislative activity has concerned the larger policy and governance issues that have emerged with the new technology. A common feature of these proposals is the establishment of new bodies to oversee and regulate the activity of both private and public actors utilizing AI. For example, the Digital Platform Commission Act of 2023 (S.1671), introduced in the Senate in May 2023 and subsequently referred to the Committee on Commerce, Science, and Transportation, would mandate the establishment of a “Federal Digital Platform Commission” to provide comprehensive regulation of digital platforms and AI products, with the intention of protecting consumers, promoting competition, and safeguarding the public interest. The proposed five-member Commission would have the power to hold hearings, conduct investigations, levy fines, and engage in public rulemaking to establish regulations for digital platforms. Other similar federal proposals include the ASSESS AI Act of 2023/Assuring Safe, Secure, Ethical, and Stable Systems for AI Act (S.1356) (aiming to establish a cabinet-level AI Task Force to identify existing policy and legal gaps in the federal government’s AI policies) and the National AI Commission Act (H.R.4223) (seeking to create a bipartisan independent commission within the legislative branch, the “National AI Commission”). Similarly, some bills have sought to situate policymaking frameworks within the existing regulatory apparatus, such as the Oversee Emerging Technology Act of 2023 (S.1577), which would mandate that certain federal agencies appoint a senior official as an emerging technology lead to advise on the responsible use of emerging technologies, including AI, to offer expertise on policies and practices, to collaborate with interagency coordinating bodies, and to contribute input for procurement policies.
These themes have extended to state legislation as well; Illinois House Bill 3563, passed in August 2023, establishes a Generative AI and Natural Language Processing Task Force to investigate and report on generative artificial intelligence software and natural language processing software.
At the state level, a related theme has been the oversight of the procurement and deployment of AI systems by state agencies. California Assembly Bill AB302, enacted in October 2023, requires the Department of Technology, in coordination with other interagency bodies, to conduct, on or before September 1, 2024, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, state agencies. Likewise, in June 2023, Connecticut passed An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy (S1103), which requires annual audits of AI systems used by state agencies and the establishment of policies regarding state agency use of AI systems. Texas’ House Bill 2060, enacted in June 2023, combines these characteristics, both calling for the creation of a seven-member Artificial Intelligence Advisory Council to study the development of AI and requiring state agencies to submit inventory reports of the automated decision systems they use.
§ 8.4.2 Algorithmic Accountability
Another significant legislative theme in 2023 has been the prevention of discrimination and other harms that may arise from our growing reliance on algorithms to inform, drive, and in some cases supplant human decision-making.
At the federal level, the White House led the charge by issuing an Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government in February, which directs federal agencies to develop and use artificial intelligence in ways that advance equity and root out bias. The AI Accountability Act (H.R.3369), introduced in the House in May 2023 and subsequently referred to the Committee on Energy and Commerce, directs the Assistant Secretary of Commerce for Communications and Information to conduct a comprehensive study on accountability measures for AI systems.
State legislatures largely focused on the potential introduction of algorithmic bias through the use of automated decision systems in the provision of particular services, especially financial services and healthcare, or in employment contexts. New Jersey’s Senate Bill S1402, introduced in February 2023, would prohibit financial institutions from using automated decision-making tools to discriminate against members of a protected class in making decisions regarding the extension of credit or eligibility for insurance or health care services. Under Illinois’ HB 3773, introduced in February 2023, employers that use predictive data analytics in their employment decisions would be restricted from using race (or zip code when used as a proxy for race) in those decisions. California’s AB1502, introduced in February 2023, would prohibit health care service plans from discriminating on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in their decision-making.
Some legislatures took broader aim at algorithmic bias by considering mandates not limited to particular use cases or services. These include Washington DC’s Stop Discrimination by Algorithms Act (B25-0114), introduced in February 2023, which would prohibit the use of algorithmic eligibility determinations in a discriminatory manner.
§ 8.4.3 Transparency
A related area of legislative interest concerns laws that promote transparency in the use of AI. These laws often require disclosures to customers or the public of an organization’s deployment of AI and can apply in a variety of contexts. Some laws may also include provisions requiring audits or human oversight of some AI functions, especially where the AI is being used to assist processes that produce significant effects on a person’s rights or interests.
At the federal level, the AI Labeling Act of 2023 (S. 2691), introduced in July 2023, would mandate clear disclosures for all AI-generated content, including images, videos, audio, multimedia, and text, and would impose obligations on developers and licensees of generative AI systems to prevent the removal of these disclosures. Likewise, the proposed REAL Political Advertisements Act of 2023 (S.1596) would mandate the inclusion of a disclaimer in political advertisements that use AI to generate images or video content, seeking to increase transparency and accountability in political campaigns and advertisements that make use of AI.
State lawmakers have also sought to promote transparency in the use of AI across a variety of contexts. California’s AB 331 on automated decision tools, introduced in January 2023, would require deployers to disclose automated decision tools that are used to make a consequential decision to individuals subject to such decisions. New York’s A7858, introduced in July 2023, would amend the labor law to require disclosure when an employer uses an automated employment decision tool to screen candidates. In Massachusetts, HB1974, introduced in February 2023, would require mental health care professionals who use AI to provide mental health services to inform patients of such use. Illinois’ legislature introduced the Artificial Intelligence Consent Act (HB3285) in February 2023, which would require that a person using artificial intelligence to mimic or replicate another’s voice or likeness in a manner that would otherwise deceive an average viewer provide a disclosure upon publication, unless the person whose voice or likeness is being mimicked consents.
§ 8.4.4 Other
As the adoption of AI has come to permeate ever-wider areas of human activity, so too the scope of proposed AI regulation has grown to encompass areas not previously associated with AI.
At the dawn of a momentous general election—and with tremendous attention being paid to the fairness and security of elections—2023 marked the arrival of AI considerations in voting laws. In one of the few state measures actually to pass in 2023, the Arizona legislature approved Senate Bill 1565 in April 2023, which would have restricted the use of AI or learning hardware, firmware, or software in voting machines. Despite its passage, Arizona’s governor vetoed the legislation. Another focus area has been the potential use of synthetic media, that is, AI-generated video, audio, or images, to mimic candidates and deceive or manipulate voters. For example, Illinois Senate Bill 1742, introduced in February 2023, would amend the election code to make it a misdemeanor for a person to create a “deepfake” video and cause it to be published or distributed within 30 days of an election with the intent to injure a candidate or influence an election.
Some proposed laws have sought to mitigate potential harms from “deepfakes” more generally. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act of 2024 (No AI FRAUD Act), introduced in January 2024, would establish civil liability for those who publish unauthorized digital depictions or who distribute “a personalized cloning service.” A more focused federal proposal in the form of the Do Not Disturb Act, also introduced in January 2024, would crack down on robocalls using digitally emulated voices and strengthen the Telephone Consumer Protection Act (TCPA) protections against such conduct. Notably, in February, the Federal Communications Commission issued an opinion clarifying that the TCPA does indeed apply to the use of AI to make a robocall.
State lawmakers have also focused attention on the risks around “deepfakes.” Foreshadowing some of the provisions of the No AI FRAUD Act, Illinois’ Artificial Intelligence Consent Act (HB3285), introduced in February 2023 and discussed above, would require a person using artificial intelligence to mimic or replicate another’s voice or likeness in a manner that would otherwise deceive an average viewer to provide a disclosure upon publication, unless the person whose voice or likeness is being mimicked consents. Louisiana’s SB175, enacted in August 2023, criminalizes the creation (or possession) of deepfake videos depicting minors engaged in sexual acts.
Myriad other areas of activity have also been subject to proposed AI legislation. For instance, Illinois’ Anti-Click Gambling Data Analytics Collection Act, introduced in February 2023, would restrict online gambling platforms from collecting data from gamblers with the intention of using the data to predict their gambling behavior. A trio of New Jersey bills introduced in 2023 aimed to assuage fears over the potential loss of jobs due to the replacement of human labor with automated processes. These bills proposed measures including requiring the Department of Labor and Workforce Development to track job loss due to automation (Assembly Bill 5150), mandating tuition-free enrollment in public universities for students impacted by automation (Assembly Bill 5224), and providing tax relief for employers who hire workers affected by automation-related job loss (Assembly Bill 5451). And in New Hampshire, House Bill 1599, proposed in January 2024, seeks to affirm the right to use autonomous artificial intelligence for personal defense.
§ 8.5 Blockchain Cases of Note
SEC v. Celsius Network Limited et al., No. 1:23-cv-6005, filed 7/13/2023. The SEC alleges that Celsius and its founder and CEO manipulated the price of the CEL token and fraudulently raised billions of dollars from investors through unregistered and fraudulent offers and sales in its “Earn Interest Program.”
SEC v. Coinbase, No. 1:23-cv-04738, filed on 6/06/2023. The SEC charged Coinbase with operating as an unregistered broker through its Coinbase Prime and Coinbase Wallet offerings, and with offering a staking program without first registering with the SEC. The SEC also alleged Coinbase operated a trading platform allowing U.S. customers to buy, sell, and trade cryptocurrency without registering with the SEC as a broker, national securities exchange, or clearing agency.
SEC v. Justin Sun, et al., No. 1:23-cv-02433, filed on 3/03/2023. Justin Sun and three of his companies, Tron Foundation Limited, BitTorrent Foundation Ltd., and Rainberry Inc., were charged with offering and selling Tronix (TRX) and BitTorrent (BTT) without registering, and for manipulating the secondary market for TRX through wash trading. Eight celebrities were charged in connection with touting TRX and/or BTT without publicly disclosing that they were compensated for doing so.
SEC v. Avraham Eisenberg, No. 1:23-cv-00503, filed on 1/20/2023. The SEC charged Eisenberg with orchestrating an attack on Mango Markets, a cryptocurrency trading platform, by manipulating the MNGO governance token, which was offered and sold as a security.
SEC v. Genesis Global Capital, LLC and Gemini Trust Company, LLC, No. 1:23-cv-00287, filed on 1/12/2023. The SEC alleges that, through the Gemini Earn cryptocurrency asset lending program, Genesis and Gemini engaged in the unregistered offer and sale of securities to U.S. retail investors.
James v. Mek Global Limited and Phoenixfin PTE Ltd d/b/a KuCoin, No. 1:20-cv-02806-GBD-RWL, filed on 3/09/2023. New York Attorney General Letitia James sued KuCoin for failing to register with the State of New York prior to allowing investors to buy and sell cryptocurrencies on its platform.
SEC v. LBRY, Inc., No. 1:21-cv-00260-PB, filed on 3/29/2021, appeal filed on 8/8/2023. The SEC alleged LBRY sold unregistered securities when it issued its own token, LBC, and received approximately $12.2 million in proceeds. Judge Barbadoro of the United States District Court for the District of New Hampshire ordered LBRY to pay a civil penalty of $111,614 and permanently enjoined LBRY from further violations of the registration provisions of the federal securities laws and from participating in unregistered offerings of crypto asset securities.
§ 8.5.1 Sam Bankman-Fried’s Conviction
In 2022, the SEC charged Sam Bankman-Fried with violating Section 17(a) of the Securities Act and Section 10(b) of the Exchange Act and Rule 10b-5 thereunder for orchestrating a scheme to defraud equity investors in FTX Trading Ltd., a crypto trading platform of which Bankman-Fried was CEO and co-founder.
While promoting FTX as a safe crypto asset trading platform, Bankman-Fried improperly used FTX customers’ funds for his privately held crypto hedge fund, Alameda Research LLC, and gave Alameda special treatment on the FTX platform.
Bankman-Fried also failed to inform investors of the risk from FTX’s exposure to Alameda’s holdings of overvalued, illiquid assets, such as FTX-affiliated tokens.
In November of 2023, Bankman-Fried was convicted, and his sentencing is scheduled for March 28, 2024.
§ 8.5.2 Indictment of CZ from Binance
In June of 2023, the SEC brought 13 charges against Binance Holdings Limited and its CEO, Changpeng Zhao, for violations of federal securities laws. Binance operates the largest crypto asset trading platform in the world.
The SEC alleged that Binance and Zhao conducted the unregistered offer and sale of crypto assets and misled investors.
Binance and Zhao claimed U.S. customers were restricted from Binance.com, but Zhao and Binance secretly allowed high-value U.S. customers to continue trading on the platform. Also, Zhao and Binance improperly exercised control of customers’ assets, which were then commingled and diverted, including to Zhao’s own entity, Sigma Chain.
In November of 2023, Binance pleaded guilty and agreed to pay over $4 billion for violations of the Bank Secrecy Act and the International Emergency Economic Powers Act, as well as for failure to register as a money transmitting business.
Zhao, a Canadian citizen, pleaded guilty to failing to maintain an effective anti-money laundering program in violation of the Bank Secrecy Act and stepped down as Binance’s CEO.
§ 8.5.3 Recent Developments from Federal Agencies
On January 3, 2023, the FDIC, together with the Federal Reserve and the Office of the Comptroller of the Currency, issued a joint statement on crypto-asset risks to banking organizations. See Joint Statement on Crypto-Asset Risks to Banking Organizations (fdic.gov).
The joint statement highlights what the agencies perceive to be the key risks associated with crypto-assets and crypto-asset sector participants, including, for example, legal uncertainties related to custody practices, redemptions, and ownership rights, as well as inaccurate or misleading representations and disclosures by crypto-asset companies.
On January 10, 2024, SEC Chair Gary Gensler announced the Commission’s approval of the listing and trading of a number of spot bitcoin exchange-traded product (ETP) shares. See SEC.gov | Statement on the Approval of Spot Bitcoin Exchange-Traded Products.
On February 29, 2024, the House Financial Services Committee advanced a bipartisan measure to eliminate a 2022 SEC staff accounting bulletin (SAB 121) on accounting for custodied crypto assets. See McHenry Delivers Opening Remarks at Markup of Fintech, Housing, and National Security Legislation | Financial Services Committee (house.gov).