

The New Frontier: Artificial Intelligence–Enabled Medical Devices

Laura A. Sexton

Summary

  • Medical devices enabled by artificial intelligence and machine learning (AI/ML) are becoming prevalent in healthcare, raising potential legal challenges.
  • This article focuses on whether these devices, given their potentially opaque nature, qualify as "products" under state products liability statutes.
  • The FDA has authorized over 600 AI/ML-enabled medical devices, predominantly in radiology, but none involving generative AI or artificial general intelligence.
  • The legal analysis draws on prior cases involving software, digital applications, and technology, which turn on the distinction between "tangible" and "intangible" characteristics and on whether the item at issue provides a service.

Medical devices increasingly rely on software incorporating artificial intelligence (AI), including a subset of AI referred to as “machine learning” (ML). The rise of AI/ML-enabled devices is likely to create many thorny legal issues. This article focuses on one specific legal question: whether AI/ML-enabled medical devices are “products” subject to state products liability statutes.

What Is AI/ML?

AI has been defined as a machine’s ability to perform the cognitive functions associated with the human mind (e.g., perceiving, reasoning, learning, and problem solving). Machine learning is a form of AI based on algorithms that are trained on data; once trained, those algorithms detect patterns and make predictions and recommendations.
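
To make the “trained on data” pattern concrete, the following minimal sketch (in Python, using the open-source scikit-learn library) fits a simple model to example data and then predicts an outcome for a new case. The data, feature meanings, and labels are invented for illustration and are not drawn from any actual medical device.

```python
# A minimal sketch of machine learning's train-then-predict pattern.
# All data below is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is [patient age, lab value];
# each label is 1 (condition present) or 0 (condition absent).
X_train = [[45, 1.2], [62, 3.8], [33, 0.9], [71, 4.1]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # the algorithm detects patterns in the data

print(model.predict([[58, 3.5]]))  # prediction for a new, unseen case
```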

Machine learning varies in complexity. “Deep learning” is more advanced and involves “neural networks” modeled on how neurons function in the human brain. Machine learning has been used for a wide variety of medical applications, from diagnosing diseases based on medical scans to predicting when equipment will require maintenance to predicting how a device will behave under different conditions.
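
A deep learning model follows the same train-then-predict pattern but passes the data through multiple hidden layers of simulated neurons. The sketch below, again purely illustrative, swaps in a small multi-layer neural network (scikit-learn’s MLPClassifier); production medical models are orders of magnitude larger.

```python
# A minimal sketch of a "deep" model: the same pattern as above, but with
# several hidden layers of simulated neurons. Data is invented for illustration.
from sklearn.neural_network import MLPClassifier

X_train = [[45, 1.2], [62, 3.8], [33, 0.9], [71, 4.1]]
y_train = [0, 1, 0, 1]

# Three hidden layers of 16 neurons each -- a (very small) neural network.
net = MLPClassifier(hidden_layer_sizes=(16, 16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print(net.predict([[58, 3.5]]))
```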

Generative AI is a form of AI that generates content in response to a prompt. ChatGPT, for example, can produce written content and then iteratively improve that content based on feedback. Generative AI is expected to be used in the medical field for many purposes, including producing higher-resolution versions of medical images and accelerating drug discovery.
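
As a rough illustration of “content in response to a prompt,” the sketch below uses the open-source Hugging Face transformers library with a small public model. The model choice and prompt are assumptions made for illustration; real medical applications would involve far larger, domain-specific models.

```python
# Illustrative only: a small open-source generative model completing a prompt.
# The model (gpt2) and prompt are assumptions, not tied to any medical device.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The chest X-ray shows", max_new_tokens=20)
print(result[0]["generated_text"])  # the prompt plus a machine-generated continuation
```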

AI/ML-enabled medical devices may raise unique legal questions because of the potentially opaque and black-box nature of the recommendations that AI/ML models generate. “You can’t just look inside a deep neural network and see how it works,” Will Knight wrote in the MIT Technology Review, because a “network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers.” Will Knight, “The Dark Secret at the Heart of AI,” MIT Tech. Rev., Apr. 11, 2017. Put differently, the “mechanisms behind [AI/ML] recommendations” may be “unknown and currently undiscoverable” because “an algorithm that cannot demonstrate the path to its conclusion is ultimately a black box.” Hannah R. Sullivan & Scott J. Schweikart, “Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?,” 21 AMA J. Ethics 160 (2019).
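
The black-box point is visible directly in code. After training, a neural network’s “reasoning” consists entirely of numeric weight matrices; inspecting them reveals raw numbers, not an explanation. The sketch below retrains the toy network from the earlier example (same invented data) and prints the shapes of its weights.

```python
# Illustrative only: a trained network's "reasoning" is just arrays of numbers.
from sklearn.neural_network import MLPClassifier

X_train = [[45, 1.2], [62, 3.8], [33, 0.9], [71, 4.1]]  # invented data
y_train = [0, 1, 0, 1]
net = MLPClassifier(hidden_layer_sizes=(16, 16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# The model's entire decision logic lives in these weight matrices:
for i, weights in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {weights.shape}")
# Output: (2, 16), (16, 16), (16, 16), (16, 1) -- hundreds of raw numbers,
# with no human-readable path from input to recommendation.
```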

The FDA Is Authorizing More and More Medical Devices with AI/ML

The use of AI is increasing at a rapid pace. A 2022 McKinsey & Co. survey across 19 industries found that AI adoption and related investments had more than doubled since 2017. The Food and Drug Administration (FDA) has already reviewed and authorized over 600 medical devices with AI/ML (through 510(k) clearance, de novo requests, and premarket approval). These devices range in complexity from shallow models to deep learning models, and the latest trend is toward hybrid models that combine different algorithmic approaches.

Historically, radiology devices have dominated this space. That trend continued in 2022, with radiology devices making up 87 percent of the AI/ML-enabled medical devices authorized by the FDA. Other categories of authorized devices included cardiovascular (7 percent), neurology (1 percent), hematology (1 percent), gastroenterology/urology (1 percent), ophthalmic (1 percent), clinical chemistry (1 percent), and ear/nose/throat (1 percent).

As of October 19, 2023, the FDA confirmed that it had not authorized any medical device that uses generative AI or artificial general intelligence (AGI) or that is powered by large language models.

State Products Liability Statutes Typically Apply to “Tangible” Products

Many states have enacted products liability statutes that impose strict liability for allegedly defective “products” in certain circumstances. A key question is whether AI/ML features or AI/ML-enabled medical devices will be considered “products” under these statutes.

The definition of “product” varies by state. See Bexis, Drug & Device Law, How the Fifty States View Electronic Data as a “Product” (July 31, 2023). Many states, however, define “products” to mean “tangible” goods. Id. This distinction comes from the Restatement (Third) of Torts, which defines a “product” as either “tangible personal property” that is “distributed commercially” or “other items” such as “real property and electricity” when “the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property.” Restatement (Third) of Torts § 19.

Courts generally agree that strict product liability should not apply to intangible words, expressions, and ideas. This is for good reason. As the Ninth Circuit explained in a landmark decision, “products liability law” is “focused on the tangible world.” Winter v. G.P. Putnam’s Sons, 938 F.2d 1033, 1035 (9th Cir. 1991). “We place a high priority on the unfettered exchange of ideas. . . . The threat of liability without fault . . . could seriously inhibit those who wish to share thoughts and theories.” Id.; see also Restatement (Third) of Torts § 19 reporter’s notes to cmt. d (collecting cases).

Moreover, “[c]ourts are unanimous in refusing to categorize commercially-provided services as products for the purposes of strict products liability in tort.” Restatement (Third) of Torts § 19 cmt. f. The Restatement makes clear that “services, even when provided commercially, are not products.” Id. § 19. Thus, “strict products liability does not extend to professionally-provided services, such as medical or legal help.” Id. § 19 cmt. f. That includes medical services like “x-rays” and “radiation therapy,” which are “treated as a service and do not support strict products liability in tort.” See id. § 19 reporter’s notes to cmt. c & cmt. f (collecting cases).

When Is Software or Technology a “Product”?

Few courts have directly evaluated whether AI/ML models are “products.” Nevertheless, much can be learned from prior case law regarding software, digital applications, websites, social media platforms, and online games.

Courts have repeatedly dismissed products liability claims involving software and technology where plaintiffs challenged “intangible” content, recommendations, information, expression, or ideas. In Rodgers v. Christie, for example, the Third Circuit was asked to decide a case involving a controversial “algorithm” used for risk assessment in bail determinations as part of New Jersey’s pretrial release system. The plaintiff asserted claims for design defect and failure to warn under the New Jersey Product Liability Act. The Third Circuit affirmed the dismissal of the complaint, holding that this algorithm was “neither ‘tangible personal property’ nor remotely ‘analogous to’ it.” 795 F. App’x 878, 880 (3d Cir. 2020). Product liability law was not the appropriate framework in that case because “‘information, guidance, ideas, and recommendations’ are not ‘products’ under the Third Restatement.” Id. (quoting Restatement (Third) of Torts § 19 cmt. d); see also Am. Online, Inc. v. St. Paul Mercury Ins. Co., 347 F.3d 89, 96 (4th Cir. 2003) (insurance policy for “physical damage to tangible property” did not cover claims related to “data and software, i.e., the abstract ideas, logic, instructions, and information”).

Similar issues have arisen in cases involving allegedly violent video game content. Courts have dismissed products liability claims alleging that violent content led users to engage in violence, reasoning that those claims do not challenge the “tangible physical characteristics” of video game products but instead challenge “the intangible thoughts, ideas and messages contained within the products.” James v. Meow Media, Inc., 90 F. Supp. 2d 798, 809 (W.D. Ky. 2000), aff’d, 300 F.3d 683 (6th Cir. 2002). “The line drawn in these cases is whether the properties of the item that the plaintiff claimed to have caused the harm was ‘tangible’ or ‘intangible.’” Wilson v. Midway Games, Inc., 198 F. Supp. 2d 167, 173 (D. Conn. 2002) (collecting cases); see also Quinteros v. InnoGames, No. C19-1402RSM, 2022 WL 898560, at *7 (W.D. Wash. Mar. 28, 2022) (dismissing product liability claim related to interactive online game), reconsideration denied, No. C19-1402RSM, 2022 WL 953507 (W.D. Wash. Mar. 30, 2022).

Courts also routinely dismiss products liability claims where the technology or software at issue offers a service rather than a product. Courts have held that certain social media platforms, internet marketplaces, and digital apps provide “services” rather than “products.” See Jackson v. Airbnb, Inc., 639 F. Supp. 3d 994, 1011 (C.D. Cal. 2022) (“Airbnb is a platform that connects users; it is more akin to a service than to a product.”); Doe v. Uber Techs., Inc., 2020 WL 13801354, at *6 (Cal. Super. Ct. 2020) (“The Court agrees with Uber that the Uber App is not a product” because “Plaintiffs were not acquiring ownership in the car they reserved or going to use the car. They were being driven from one location to another by the person who owned the car. That is a service.”); Grossman v. Rockaway Twp., 2019 WL 2649153, at *15 (N.J. Super. Ct. 2019) (dismissing complaint related to a digital app because there were “no facts alleged that would support the theory that [defendant’s] actions qualify or constitute a product under the Product Liability Act . . . rather than a ‘service’”); Eberhart v. Amazon.com, Inc., 325 F. Supp. 3d 393, 399–400 (S.D.N.Y. 2018) (“Amazon is better characterized as a provider of services.”) (collecting cases).

These holdings are not universal, however, and courts have not always reached uniform results. Still, two key considerations have emerged: (1) whether plaintiffs are challenging the software or technology based on its “tangible” characteristics, as opposed to its intangible content, and (2) whether the software or technology at issue provides a “service” rather than a product.

Application to Medical Devices with AI/ML

Given the range of AI/ML models being developed, it is unlikely that a single “one-size-fits-all” rule will apply to AI/ML-enabled medical devices. The potential applications for AI/ML are vast, ranging from helping physicians diagnose diseases to improving the quality of medical scans to helping surgeons understand how a device might perform in different scenarios. In some cases, AI/ML might simply be one of many tools used by physicians; at the other end of the spectrum, AI/ML-enabled devices could eventually substitute for traditional medical services. The potential fact scenarios are limitless.

Nevertheless, a few observations can be made based on existing case law. First, just as with any technology or software, courts will likely begin by evaluating whether plaintiffs are challenging the “tangible” aspects of a medical device or only its “intangible” characteristics. Take the example of an algorithm generated by AI/ML. Suppose a plaintiff files a lawsuit claiming that the algorithm in a medical device was defectively designed and, as a result, provided a faulty recommendation. That fact pattern may be analogous to prior cases claiming injuries from recommendations in books, which have overwhelmingly been deemed to be outside the scope of products liability law. See Restatement (Third) of Torts § 19 reporter’s notes to cmt. d (collecting cases).

Second, courts are “unanimous” that products liability law does not apply to “services.” Restatement (Third) of Torts § 19 cmt. f. Courts have repeatedly dismissed cases involving software and technology where a service, rather than a product, is being provided. To the extent that AI/ML-enabled medical devices might eventually serve as substitutes for traditional medical services, the same reasoning may apply.

Third, to the extent that AI/ML-enabled devices are unique, highly customizable, and not replicated in physical products, these characteristics may help to distinguish AI/ML-enabled devices from physical products and further weigh against the application of products liability law.

An important caveat, however, is that each AI/ML-enabled device is likely to have unique features that require a separate case-by-case analysis. Moreover, this article focuses specifically on the question of how state products liability statutes may or may not apply to AI/ML-enabled medical devices. Whether negligence law or other applicable law might apply to AI/ML-enabled medical devices is beyond the scope of this article. Finally, this is an emerging area of law, and the considerations employed by courts may change as the world learns more about AI/ML.
