Strict Liability
Manufacturers or developers may be held to a strict liability standard when: (1) a product sold or used on a patient contains a defect that is unreasonably dangerous; (2) the defect causes harm to the consumer; and (3) the harm results in actionable injury.[2] For strict liability claims, the plaintiff does not need to prove the defendant acted carelessly. Rather, the plaintiff only needs to show a product defect at the time of sale that caused the plaintiff’s harm, regardless of fault.[3] Manufacturers of prescription drugs and medical devices that may be legally sold or otherwise distributed pursuant to a healthcare provider’s prescription are liable when defects cause harm to patients.[4]
The defect can be in the design, manufacture, or labeling of the product.[5] A design defect may be found where a product is in a “defective condition unreasonably dangerous to [a] user or consumer” or if the “foreseeable risks of harm could have been reduced by the adoption of a reasonable alternative design.”[6] A medical drug or device is defectively designed if the foreseeable risks of harm posed by the drug or device are sufficiently great in relation to its foreseeable therapeutic benefits such that reasonable providers would not prescribe it to any class of patients.[7] A manufacturing defect occurs when the manufactured product does not conform to the manufacturer’s own specifications or requirements and causes harm.[8] For example, a defibrillator with defective wire insulation that could trigger electric shocks in a patient may constitute such a defect. Additionally, a labeling defect can lead to a failure to warn claim, based on the “manufacturer’s failure to provide adequate warnings to the consumer of dangers inherent to the product or to provide instructions for the safe use of the product.”[9] In the healthcare context, because the law accepts that prescription medical products have inherent and unavoidable risks and require physician approval prior to use, warnings or instructions are inadequate where they fail to reasonably disclose risks to prescribing and other healthcare providers who are in a position to reduce risks of harm.[10]
Breach of Warranty
Plaintiffs can also bring products liability claims under breach of warranty theories, which provide an independent basis of liability under the Uniform Commercial Code (UCC).[11] The elements for breach of warranty are similar to those for strict liability, and plaintiffs can assert warranty claims in jurisdictions that do not recognize strict products liability as a cause of action.[12] Breach of express warranty claims arise when the seller makes “express” guarantees that the product will perform in a particular way. Such an express warranty is usually set forth in a sales contract, but the seller can also convey express warranties through oral representations about the product.
The UCC also recognizes two implied warranties: (1) the implied warranty of merchantability and (2) the implied warranty of fitness for a particular purpose. Manufacturers may be held liable for breach of the implied warranty of merchantability when they sell a consumer a product that is not reasonably fit for the purposes or intended use for which that product or similar products are sold. Similar to a manufacturing defect claim, an implied warranty of merchantability claim arises when the manufacturer sells a product that departs from the manufacturer’s specifications, regardless of fault.[13]
Claims for breach of the implied warranty of fitness for a particular purpose arise when a seller knows a consumer is buying a product for a specific purpose, the seller knows the consumer relies on the seller’s skill and expertise in choosing the right product for that intended purpose, and the product is not appropriate for that purpose.[14] For this type of claim, the product does not need to be defective for a seller to violate the implied warranty of fitness.
Negligence
To establish negligence, a plaintiff (e.g., a patient) must show that: (1) the defendant (e.g., the physician and/or manufacturer) had a duty of care to the consumer; (2) the defendant breached that duty by subjecting the consumer to an unreasonable risk of harm; (3) the breach caused the consumer harm; and (4) the harm resulted in an actionable injury.[15] A plaintiff must generally prove the defendant acted unreasonably in light of foreseeable risks, and the standard is based on what an objective, hypothetical “reasonable” person would have done under the circumstances.[16] Negligence therefore rests on a showing of fault leading to the product defect.[17] For a negligence claim to succeed in the context of medical AI, the plaintiff would likely focus on the manufacturer’s development, validation, or manufacturing processes; the appropriateness of clinician training; the effectiveness of the instructions for use, warnings, or other disclaimers; or postmarket adverse event and response (e.g., consumer warning, recall) processes.[18]
Common Defenses
Learned Intermediary
Despite the existence of these and other product liability pathways, several courts have adopted the learned intermediary doctrine, an exception to the rule that manufacturers have a duty to warn patients directly about the risks of certain products.[19] The doctrine protects manufacturers of prescription drugs and restricted (e.g., prescription) medical devices from product liability arising under failure to warn theories if the manufacturer provided adequate warning to the prescribing physician about the product’s risks. If the defense is successful, the patient’s remedy is limited to a medical malpractice or negligence claim against the physician.
The strength of the learned intermediary defense has eroded in recent years.[20] Several courts have refused to apply the learned intermediary doctrine in personal injury actions involving prescription drugs and devices that were aggressively marketed to the public through direct-to-consumer advertisements, the Internet, and other forms of media.[21] Courts have found that where the manufacturer’s marketing efforts “drown out” warnings and limitations on use, the learned intermediary doctrine may not insulate manufacturers from liability.
Preemption
The Federal Food, Drug, and Cosmetic Act’s (FDCA) medical device preemption provision has been used in some product liability actions to limit negligence or failure to warn theories arising under state laws.[22] In Riegel v. Medtronic, Inc.,[23] the Supreme Court held that 21 U.S.C. § 360k(a) bars common law claims against a manufacturer that challenge the safety or effectiveness of a Class III medical device if the device received premarket approval from the Food and Drug Administration (FDA).[24] For example, in the context of prescription drugs, the FDA requires the manufacturer to put specific warning language on the label. If the manufacturer uses that language, the FDCA preempts a plaintiff’s claim for failure to warn of a dangerous condition or side effect.[25] Several judicial decisions have limited the scope and applicability of medical device preemption to high-risk devices that have gone through the premarket approval (PMA) process.[26] There are also exceptions to preemption, such as where the drug manufacturer knew of adverse effects and did not disclose those effects to the FDA.
The Biomaterials Access Assurance Act (BAAA) also preempts state and federal claims against biomaterials suppliers who provide component parts to manufacturers of implant medical devices.[27] The BAAA defines an implant as
(A) a medical device that is intended by the manufacturer of the device . . . (i) to be placed into a surgically or naturally formed or existing cavity of the body for a period of at least 30 days; or (ii) to remain in contact with bodily fluids or internal human tissue through a surgically produced opening for a period of less than 30 days; and (B) suture materials used in implant procedures.[28]
The BAAA defines a component part as “a manufactured piece of an implant,” including one that “(i) has significant non-implant applications; and (ii) alone, has no implant value or purpose, but when combined with other component parts and materials, constitutes an implant.”[29]
Application of Product Liability Theories to Healthcare AI
Product versus Software
A threshold question in determining whether traditional theories of product liability apply to medical AI is whether the AI system is considered a “product” and thus subject to strict liability and/or breach of warranty. If the AI system is instead considered merely a service tool that assists healthcare providers in making treatment decisions, then liability would likely require analysis under a negligence theory.[30] The Restatement (Third) of Torts: Products Liability defines “product” as “tangible personal property distributed commercially for use or consumption” or any other item whose “context of . . . distribution and use is sufficiently analogous to [that] of tangible personal property.”[31] Relying on this definition, the Third Circuit held that the New Jersey Products Liability Act (NJPLA) does not apply to an AI algorithm used by the state to evaluate prisoner candidates for its pretrial release program because an “algorithm” or “formula” does not constitute “tangible personal property,” nor is it remotely “analogous to it,” and therefore is not a “product” under the NJPLA.[32]
Although courts have traditionally been reluctant to apply product liability theories to software developers, that view may change as AI software becomes more integrated into certain medical AI devices.[33] As more algorithms are designed to automate the performance of clinical diagnostic tasks, the FDA could categorize these technologies as medical devices, either because the algorithms are embodied in traditional medical devices, or classified under the FDA’s Software as a Medical Device guidance.[34] In the context of other technologies, courts have started to examine whether the software was integrated with a physical object, whether the object was mass-produced, or whether it had dangerous potential.[35] For example, two crashes of the Boeing 737 MAX that left no survivors illustrate software design issues that contributed to serious harms.[36] Although the software at issue did not involve AI systems, Boeing may be liable under strict liability and warranty theories because the software was so integrated into the aircraft.[37] Similarly, in the context of autonomous vehicles, courts have determined that AI software embodied in a machine capable of causing great harm is a product.[38] Courts could make that same determination for AI machine vision software used for medical imaging where that AI software is integrated into a tangible machine with cameras and sensors.[39]
Courts may also treat different types of AI systems differently for products liability purposes. Some medical professionals and researchers suggest a classification system based on the AI device’s level of autonomy or system capabilities.[40] Consider AI devices performing simple tasks like automating the patient check-in process at health clinics versus an AI-enabled system used to identify abnormalities in MRI images. The latter system continues to evolve and modify its decision-making process for a more accurate response through self-learning.[41] Should the same theories of liability apply? As AI technology and its applications in medicine continue to develop, practitioners should follow carefully whether courts apply product liability concepts differently to those different classifications.
Test to Determine Whether Healthcare AI Is Defective
In the event a particular healthcare AI is considered a product, the question is then how courts will determine whether it is defective. Strict liability claims involving AI software as a medical device are more likely to be brought under theories of design defect or failure to warn.[42]
Design defects are generally established through one of two tests, with minor state-by-state variation: the consumer expectations test from the Restatement (Second) of Torts or the risk-utility test from the Restatement (Third) of Torts.[43] Under the consumer expectations test, a product is defective in design if it is in a “defective condition unreasonably dangerous to [a] user or consumer.”[44] An “unreasonably dangerous defect” is one that makes a product fail to perform as safely as an ordinary consumer would expect when used in an intended or reasonably foreseeable way.[45] However, the consumer expectations test applies only where the everyday experience of the user permits the conclusion that the product’s design violated minimum safety assumptions.[46] The California Supreme Court has articulated a version of the consumer expectations test under which, even for complex products, the test may apply where the consumer’s everyday experience permits a conclusion that the design violated minimum safety assumptions, regardless of expert opinion about the merits of the design; that California model is a minority view.[47] The majority of jurisdictions hold that a product is defective if it is dangerous beyond the extent anticipated by an ordinary user, without expressly accounting for the product’s complexity.[48] Because medical AI consists of specialized medical devices that incorporate AI-based algorithms, the consumer expectations test is unlikely to apply.[49] Instead, the risk-utility test is more likely to govern medical AI.[50]
Under the risk-utility test, a plaintiff must show “the foreseeable risks of harm posed by the product could have been reduced by the adoption of a reasonable alternative design.”[51] The word “reasonable” imports a quantitative cost-benefit analysis into the risk-utility test such that a product is defective in design if the inherent risks in the design outweigh the benefits.[52]
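One way to make this balance concrete (a simplified, illustrative formalization rather than any court’s stated formula; the symbols are introduced here purely for exposition) is in expected-value terms, where P is the probability of the foreseeable harm under a given design, L is the magnitude of that harm, and B is the burden of adopting the reasonable alternative design:

```latex
% Illustrative only: an expected-value reading of the risk-utility balance.
% A design may be found defective when the expected harm that a reasonable
% alternative design would have avoided exceeds the burden of adopting it.
\[
\left(P_{\text{current}} - P_{\text{alternative}}\right) \cdot L \;>\; B_{\text{alternative}}
\]
```

On this reading, the “reasonableness” of the alternative design turns on whether the safety it buys is worth what it costs.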
AI tools can be extremely beneficial, particularly in the healthcare context, as they can accelerate analysis, expand the provider’s knowledge base, and speed the review of vast amounts of data.[53] Medical AI has enormous potential in prognostics, diagnostics, image analysis, resource allocation, and treatment recommendations.[54] Some health diagnostic tools in development that employ advanced AI systems appear capable of performing as well as their human counterparts, and sometimes better.[55] In the future, technological performance is expected to continue to improve, and the number and scope of activities subject to this kind of automation will likely increase.[56]
The benefit of healthcare AI is evident amid the global coronavirus pandemic, as software developers, scientists from top universities, and companies are teaming up to research whether AI can curb the spread of the disease and help fight future global health crises.[57] AI software can help predict potential coronavirus outbreaks by learning to flag disease risk and outbreak threats based on personal data, such as medical history, real-time body-temperature readings, current symptom reports, and demographics.[58] Researchers are hopeful AI will help find ways to slow the spread of disease through contact tracing, speed the development of medical treatments, design and repurpose drugs, plan clinical trials, predict the disease’s evolution, judge the value of interventions, improve public health strategies, and find better ways to fight future infectious disease outbreaks.[59]
Design defects under the risk-utility test are typically established through expert testimony, especially in cases involving advanced technology such as AI-based algorithms.[60] In a design defect case involving AI software as a medical device, the plaintiff’s expert would need to establish that the risks of using the AI-based software to diagnose or treat a patient outweighed the benefits.[61] In response, a manufacturer would likely hire an expert to claim that the design pushed the limits of technological feasibility and that the costs of any alternative design far outweighed any minimal safety improvement.[62] Important considerations include the magnitude of the harm posed by the design (how a patient could be harmed if a diagnosis is missed or an incorrect treatment is recommended); the likelihood that such a harm would occur (the algorithm’s error rate); the feasibility of a safer alternative design; and the financial cost of the safer alternative design.[63] The risk-utility test may ultimately hinge on whether the medical AI performed at least as safely as its human counterparts.[64] Even if there were no safer alternative design, the manufacturer would have to demonstrate that it adequately warned of the risks.[65]
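As a rough illustration of how an expert might frame that comparison, the sketch below (written in Python purely for exposition; every figure, threshold, and variable name is an invented assumption, not data from any actual device or case) weighs the expected harm of a hypothetical AI tool’s current design against a safer alternative design and a human baseline:

```python
# A minimal, hypothetical sketch of the kind of risk-utility comparison an expert
# might walk a factfinder through. All numbers, names, and thresholds are invented
# for illustration; they are not drawn from any actual case or device.

def expected_harm(error_rate: float, harm_if_error: float, uses_per_year: int) -> float:
    """Expected annual harm: probability of a harmful error times its magnitude times volume."""
    return error_rate * harm_if_error * uses_per_year

HARM_IF_MISSED = 500_000    # assumed cost (in dollars) of a missed or wrong diagnosis
ANNUAL_READS = 10_000       # assumed number of studies read per year

current_design = expected_harm(error_rate=0.020, harm_if_error=HARM_IF_MISSED, uses_per_year=ANNUAL_READS)
alternative    = expected_harm(error_rate=0.012, harm_if_error=HARM_IF_MISSED, uses_per_year=ANNUAL_READS)
human_baseline = expected_harm(error_rate=0.030, harm_if_error=HARM_IF_MISSED, uses_per_year=ANNUAL_READS)

ALTERNATIVE_DESIGN_COST = 2_000_000  # assumed annual cost of adopting the safer design

harm_avoided = current_design - alternative
print(f"Expected harm avoided by alternative design: ${harm_avoided:,.0f}/year")
print(f"Cost of alternative design:                  ${ALTERNATIVE_DESIGN_COST:,.0f}/year")
print(f"Risk-utility favors the alternative design:  {harm_avoided > ALTERNATIVE_DESIGN_COST}")
print(f"AI outperforms the human baseline:           {current_design < human_baseline}")
```

In practice, of course, each of these inputs (error rates, harm magnitudes, and the cost and feasibility of the alternative design) would itself be contested through expert testimony.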
Applying either test to medical AI will present significant hurdles for concepts of foreseeable consumer uses and anticipated harms.[66] The complexity underlying AI systems makes demonstrating the availability of alternative programming designs extremely difficult. Additionally, because AI systems are designed to interact and evolve with the user over time, it will be difficult to import traditional product liability understandings of foreseeability.[67] Under either test, testing, simulations, and field performance data covering foreseeable uses and misuses would help demonstrate reasonable safety and the product’s benefits.[68]
Potential Sources of Liability
When healthcare AI causes harm, beyond the questions of whether it qualifies as a “product” and which “defect” test applies, there is the further issue of who is responsible. There could be several companies involved in the algorithmic design of the product, multiple entities in the chain of distribution, and different types of AI technologies that complicate traditional applications of product liability.[69] Take, for example, a medical device that uses AI software to identify abnormalities in MRI images. The AI-enabled device is marketed to healthcare providers as a tool for increasing efficiency in interpreting MRIs.[70] The AI algorithm is developed by coders working for a software company. Trainers test and refine the algorithm’s accuracy in detecting abnormalities using millions of pieces of data.[71] A hardware manufacturer then integrates the software into a physical device with scanners and sensors. A radiology clinic implements the device in its practice, and a radiologist then uses the device to interpret the MRI of Patient X. If Patient X is misdiagnosed, there are various potential sources of the defect.
Those involved in developing and training the algorithms could be the source of the defect. If the algorithm developer wrote code that causes the system to misinterpret a type of abnormality, the coder or software company could be liable for negligence or a design defect. If the AI system is trained in a way that makes it better at identifying certain abnormalities but worse at identifying others, the trainers could be subject to claims of design defect for developing a system that evolved in ways that allowed for performance trade-offs.[72]
The product manufacturers are another potential source of harm. If a device is manufactured with a faulty sensor, there could be a claim based on a manufacturing defect. If the system is designed to read images meeting a certain resolution threshold and the product does not contain a warning that lower-resolution images could lead to misdiagnosis, there could be liability based on both negligence and failure to warn.[73] If marketing campaigns claimed that the device detects abnormalities more accurately than radiologists, and that turns out not to be the case, there could be claims based on breach of warranty theories of liability.
Moreover, the healthcare providers utilizing the product could be the source of liability. If an updated standard of care reflects the use of AI (as discussed later), healthcare facilities and professionals using such devices might have a duty to evaluate algorithms and validate results before implementing the devices into patient care.[74] If the radiologists at the clinic are not properly trained on how to use the device, the clinic could be subject to liability for misdiagnosis. If the radiologist disagrees with the AI system’s interpretation but follows it anyway, there could be a claim for negligence. With respect to AI devices within a clinical setting, and particularly those that are subject to training, some form of the learned intermediary doctrine may become more relevant.
Negligence: Standard of Care
Liability for medical errors is based on a different standard of care than that of the “reasonable person,” and is instead based on the “reasonable physician.”[75] Any professional is held to a standard of care that, at its most fundamental level, recognizes that the professional will exercise his or her judgment in the performance of the profession.[76] That judgment is exercised in the context of past and ongoing learning, training, experience, and the utilization of existing tools that assist the professional in exercising that judgment.[77] The professional takes the facts and circumstances and weighs various options regarding a course of action. For physicians, the usual standard of care is the reasonable degree of skill, knowledge, and care ordinarily possessed and exercised by members of the medical profession under similar circumstances.[78]
AI systems could affect the basic standard of care for healthcare providers in two ways.[79] First, as with any new technology, the general availability of an AI system may affect the practical application of the standard. If these types of AI systems are available, and they have demonstrated their ability to improve performance, then utilization of the tool may, at some point, constitute the standard of care. On its face, this is no different from any other technology; but as AI systems continue to improve, the imperative for their utilization may increase, and because some tools do not require close geographic proximity, the importance of geographic factors will diminish.[80] Second, as an AI system continues to improve, not only may the imperative to use the tool increase, but an imperative may also emerge for the individual practitioner to defer to the AI system in circumstances normally reserved for human judgment.[81] While this may seem to be just the other side of the coin, this dynamic can create complexity and may require legal and regulatory evolution.
Additionally, privacy concerns underlie the development of medical AI devices. Medical AI devices use data from a variety of sources: electronic health records; medical literature; clinical trials; insurance claims data; pharmacy records; and information entered by individuals on smartphones and fitness trackers.[82] Privacy is essential both in gathering the large amounts of healthcare data needed to develop algorithms and in sharing that data to oversee the algorithms.[83] This technology comes with added risk to patient privacy and confidentiality.[84] Because developing and training machine learning algorithms requires data from multiple sources, that data may then be shared with other healthcare entities for purposes of evaluation and validation. HIPAA’s Privacy Rule requirements related to the disclosure and use of protected health information by covered entities are a major concern.[85] While the Privacy Rule does not govern de-identified information or data collected by non-covered entities like Google or Apple, the potential for data aggregation and re-identification remains.[86] As more data is shared across a variety of organizations, policies and procedures for de-identifying and sharing data need to be developed to protect patient privacy.[87] Ensuring that systems are secure against hackers and other unauthorized intrusions is also necessary to prevent breaches of confidential personal health information.[88]
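By way of illustration only, the following sketch (with hypothetical field names; dropping these fields is not, by itself, HIPAA Safe Harbor de-identification and would not satisfy the Privacy Rule on its own) shows the kind of step such policies might include before a record is shared for algorithm evaluation:

```python
# Hypothetical illustration only: dropping a few direct identifiers before sharing a
# record for algorithm evaluation. Real de-identification under HIPAA (Safe Harbor or
# expert determination) involves far more than this, and aggregated data can still
# carry re-identification risk.

DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "email", "phone", "street_address"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with the listed identifier fields removed."""
    return {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

patient_record = {
    "name": "Jane Doe",           # direct identifier: dropped before sharing
    "mrn": "123456",              # direct identifier: dropped before sharing
    "age": 54,                    # retained for model evaluation
    "imaging_finding": "nodule",  # retained for model evaluation
}

print(strip_direct_identifiers(patient_record))  # {'age': 54, 'imaging_finding': 'nodule'}
```

Even with such steps, as noted above, aggregation of records from multiple sources can leave residual re-identification risk.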
Other Considerations
Black Box AI
One complication when considering potential sources of defect or causation for liability purposes is how these theories of liability hold up against different types of AI. Can the same theories of causation apply to an algorithm using open source code and to a “black box” algorithm where the reasons behind its conclusions are unknown or undiscoverable? For an open source or transparent algorithm, experts should be able to determine whether a flaw in the code was the source of a defect. Indeed, in a transparent AI program, one would imagine experts and the court could determine the precise cause of the defect. Where an AI-enabled medical device functions in a way that can be traced back to the human programming, design, and knowledge inputs, existing products liability doctrines will be adequate to assign fault and allocate responsibility.[89]
On the other hand, determining cause is harder in the black box AI context: the algorithm cannot demonstrate the path to its conclusion, so the mechanisms behind its recommendations are unknown.[90] The opacity of black box AI stems from deep neural networks modeled loosely on the human brain. The system can take an MRI image and use a neural network trained on a large data set to produce a classification of the abnormality, but the reasoning that led to its conclusion is undiscoverable.[91] Furthermore, in much the same way the human brain learns and modifies its decision-making process, self-teaching algorithms can learn and evolve to improve accuracy in ways that are not explainable, even by those who developed and trained the algorithm. This self-learning capability leads to increased autonomy, and the algorithms become less intelligible to developers, trainers, and users with each improvement.[92] The application of existing product liability law becomes correspondingly more challenging.
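A minimal sketch (a toy model with arbitrary weights, not any real diagnostic system) illustrates the point: the caller receives only a label and a confidence score, while the learned parameters that produced them carry no human-readable rationale.

```python
# Toy illustration of opacity: a tiny feed-forward network with arbitrary "learned"
# weights. A real diagnostic model would have millions of such parameters; none of
# them individually explains why a particular image was labeled abnormal.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 64))   # stand-in for learned weights, layer 1
W2 = rng.normal(size=(64, 2))    # stand-in for learned weights, layer 2

def classify(image_features: np.ndarray) -> tuple[str, float]:
    """Return only a label and a confidence score; no reasoning is exposed."""
    hidden = np.maximum(0.0, image_features @ W1)   # ReLU layer
    logits = hidden @ W2
    logits = logits - logits.max()                  # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum()   # over {normal, abnormal}
    label = ["normal", "abnormal"][int(probs.argmax())]
    return label, float(probs.max())

features = rng.normal(size=16)   # stand-in for features extracted from an MRI image
print(classify(features))        # a label and a score, but no explanation of "why"
```

Scaled up to millions of parameters trained on large image datasets, this is the opacity that frustrates efforts to trace a misdiagnosis back to a specific design choice.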
For example, the “black box” nature of some AI devices presents particular challenges for the preemption defense.[93] Some argue that although AI medical devices are capable of learning and making improvements over time, which could lead to significant changes from what the FDA initially approved, the Riegel preemption framework still applies.[94] If the FDA approves an algorithm that is constantly changing and self-learning, the FDA is approving the algorithm, not the outcome. Under this view, if the changes are within the scope of what the FDA approved, claims challenging the AI device should still be preempted. Others take the position that claims involving black box AI medical devices can never be preempted.[95]
In addition, self-learning algorithms present challenges to the concept of foreseeability: how can theories of liability based on foreseeability apply to technologies that make decisions humans cannot comprehend? For instance, with deep learning, a category of machine learning that uses multilayered neural networks to learn representations from data, it may be impossible to understand and foresee an algorithm’s decision-making process.[96] Similarly, AI programs with reinforcement learning capabilities are able to identify, categorize, and incorporate or exclude new data derived from aftermarket data inputs from consumers.[97] Future incorporation of consumer data could lead to unforeseeable risks or injuries outside the control of the original programmers or manufacturers.
A related challenge is that, as autonomous learning capabilities increase, fewer parties (such as clinicians, healthcare organizations, and AI designers) have agency or control over AI devices. With non-autonomous technologies, human involvement in and control over the algorithms lead to clear paths to liability for the individuals and entities that utilize such technologies. For example, in litigation over the safety of surgical robots, the non-autonomous robots are agents or instruments of entities that have legal capacity, such as surgeons and hospitals, and liability attaches based on some form of agency theory.[98] In contrast, as more autonomous devices become available that can act independently of human instruction and continue to self-learn and evolve, attributing legal liability to individuals and entities for the actions or decisions of autonomous devices becomes more tenuous.[99] And with self-learning AI devices, how does this tenuous connection affect the learned intermediary doctrine? Different gradations of AI could lead to different applications of products liability theories, and defenses, on a sliding scale reflecting the complexity of the underlying AI systems, their autonomy, and their self-learning abilities.
Thus, the black box nature of AI systems may pose a challenge for the basic standard.[100] How can we tell whether a defect exists? Should the “trainers” of the AI system be evaluated, and if so, against which standard? Does this standard have a role in evaluating and holding these types of AI systems accountable?[101] On the other hand, the social call for some strict liability standard may be heightened for direct-to-consumer AI products.[102] If such products purport to provide actionable health-related information or advice, particularly outside of a clinical setting, the risk of injury may increase.[103]
Patient Informed Consent
As the use of AI systems becomes increasingly common in diagnosing and evaluating patient conditions and treatment recommendations, “informed consent” may become an important feature of the patient-physician relationship.[104] Effective informed consent requires that a patient fully understand the healthcare treatment that he or she is agreeing to undergo.[105] While the requirement of informed consent emanates from the goal of a patient consenting to whatever is done with his or her body, we can easily see its expansion to circumstances in which a physician turns over the professional conclusion and advice to a machine, particularly when the technology remains new and, perhaps, unproven or untrustworthy in the eyes of the general public.[106] Just as a patient may refuse prescribed treatment, should the patient not also be able to refuse diagnosis by a machine?[107] If a patient has a preference to not turn over significant conclusions to an AI system, then what sort of informed consent may be appropriate? Further, if an AI system can quantify its confidence, should the informed consent process be different?[108] If a patient is able to avoid AI diagnosis, should this impact the applicable standard of care and the potential liability of the physician?[109]
Medical Malpractice
The U.S. medical malpractice regime holds physicians to accepted reasonable standards of care and assigns liability when the care provided by physicians constitutes negligence or recklessness.[110] In general, courts apply a national standard (for specialties) or a state-specific standard and evaluate whether a physician’s actions are consistent with the degree of skill, care, and learning that is possessed and exercised by members of the medical profession in good standing.[111] In medical malpractice cases, expert witnesses testify as to what the appropriate standard of care should be.[112]
Although very few cases directly address malpractice risk associated with the use of AI systems, and case law has not yet established distinct principles for evaluating the medical malpractice standard of care in the context of AI systems, courts generally have applied a standard of care for physicians that is neither static nor rigid.[113]
Data Privacy
As AI evolves and accelerates the analysis of personal information, it magnifies the risk of invasions of privacy interests.[116] For instance, the emergence of advanced facial recognition software has prompted cities to ban certain uses of the technology.[117] As Congress works to pass privacy legislation that protects individuals against adverse effects of the use of personal information in AI, much of the debate has centered on algorithmic bias and the potential for algorithms to produce discriminatory results.[118] Current data privacy legislation and Federal Trade Commission enforcement are based on “notice-and-choice” models of consent, in which consumers bear the burden of protecting their privacy through notifications linked to lengthy privacy policies and terms and conditions.[119] With the use of AI systems, legislators and privacy stakeholders have expressed a desire to shift the burden of protecting individual privacy from consumers to the businesses that collect the personal data, which could lead to liability under products liability theories.[120]
With the increasing prevalence of networks of connected devices, referred to as the Internet of Things (IoT), cyberattacks at the hands of hackers threaten data privacy.[121] Products liability doctrine may be a way to hold manufacturers liable for insecure devices, as manufacturers are in a better position to mitigate the damage of cyberattacks.[122] For instance, a software company could be liable for a design defect where default security settings can be easily accessed by hackers.[123] Consumers could also bring manufacturing or design defect claims for certain coding errors and oversights.[124] If a system was produced with vulnerabilities making it susceptible to a cyberattack, a consumer could allege the manufacturer failed to warn of that danger.[125] Questions about how traditional product liability law will apply to data privacy mirror those in the medical AI context: How do you allocate responsibility for damages? If there is a software failure rather than a defect in the physical product, can and should the maker of the product be held liable for the software failure? How will liability be judged at trial if there are no standard safety requirements?[126] While plaintiffs have filed products liability suits involving IoT devices, the litigation thus far has focused on standing issues (related to the lack of actual harm posed by vulnerability to future hackers) rather than on the product liability issues.[127] Because courts have not yet addressed how products liability theories will apply to data privacy concerns, it will likely fall to regulators and legislators to give initial guidance on these issues.
Allocation of Responsibility
Given the black box nature of AI systems and the unique ways they are developed to perform their functions, determining liability will become complicated. As noted earlier, different actors performing different functions (including the AI system, its trainers, and those tasked with maintaining it) may be implicated, and the role and function of the healthcare practitioner may begin to change given the nature of some of these AI systems.[128] The practitioner may take on responsibility for AI system training or testing. Further, the practitioner may be working with data and diagnostic information from tools chosen by the patient and not the physician. Patients may be more empowered to take more control of their healthcare and utilize physicians in different ways.[129] The allocation of responsibility is a difficult question, and one that will likely evolve over time.
Contracting Responsibility
One way to allocate responsibility for AI-caused harm is through contract. Contractual warranties, indemnities, and limitations among entities involved in the design, manufacturing, and implementation of AI systems and devices can reduce uncertainty by explicitly allocating liability.[130] Contractual provisions can also resolve questions about which law governs and which theories of liability might apply to any future disputes.[131]
Joint and Several/Enterprise Liability
Joint and several or enterprise liability theory is another potential approach to allocating responsibility for AI-caused harm. Under an enterprise theory of liability, where a common enterprise exists, such as developing and manufacturing an AI medical device, each entity within the enterprise may be held jointly and severally liable for the actions of any entity that is part of the group.[132] If an AI system causes injury, instead of assigning fault to a specific person or entity, all groups involved in the use and implementation of the AI system should jointly bear some responsibility.[133] In the context of an AI-enabled medical device, the several entities responsible for developing and training the AI software and the hardware device manufacturers could all share liability. Because liability would be shared among all relevant parties and no finding of individual fault is required, this approach avoids some of the challenges posed by black box AI.[134]
Legislative Regulation
Since the case law surrounding AI in medicine is underdeveloped, legislatures may be in a better position to allocate responsibility and establish public policy through lawmaking.[135] As AI technology and its potential applications in medicine continue to evolve, so too will the existing legislative framework for tort and product liability.[136]
Case Law from Other Industries
Because the case law on AI medical technology is underdeveloped and questions related to the application of traditional theories of products liability remain unanswered, life science leaders can turn to other industries for guidance.[137] For example, autonomous vehicles in the transportation industry may offer insights on what to expect when healthcare AI and product liability law collide. Various lawsuits have been filed against autonomous vehicle manufacturers alleging products liability claims.[138] The first autonomous vehicle lawsuit in California, Nilsson v. General Motors LLC,[139] arose from a 2017 accident in which a motorcycle collided with a vehicle in self-driving mode.[140] In Huang v. Tesla Inc., a Tesla owner died after the vehicle collided with a concrete median that the vehicle failed to detect. The lawsuit alleged product liability theories of design defect, failure to warn, intentional and negligent misrepresentation, and false advertising.[141]
Despite predictions that autonomous vehicles will change car accident litigation by shifting liability from vehicle owners and drivers to manufacturers under product liability law, that change has been slow.[142] Manufacturer defendants in recent lawsuits claimed that human error contributed to the accidents involving semi-autonomous systems, which suggests that fault by human parties is still a feature of the existing autonomous vehicle litigation.[143] Because these suits have settled or are still pending, the application of product liability theories remains to be seen.
Another feature of autonomous vehicle litigation is identifying the relevant manufacturer. Similar to medical AI devices, autonomous vehicle systems are composed of components like processors, sensors, mapping systems, and software that companies like Tesla outsource to third-party manufacturers and developers. Recent plaintiffs have named several co-defendants in autonomous vehicle litigation, which adds complexity to determining the source of the defect and causation.[144] A further complication is that autonomous vehicle manufacturers have claimed that data associated with a crash is proprietary. In a lawsuit related to a 2019 accident involving Tesla’s autopilot system, the driver has been unable to obtain data from the car’s black box because that data was not recorded locally but instead transmitted over the airwaves to Tesla’s remote cloud repository.[145]
While the case law on autonomous vehicles is still underdeveloped, parallel considerations in the regulatory and legislative context are informative for the healthcare sphere. The U.S. Department of Transportation, through the National Highway Traffic Safety Administration (NHTSA), partners with state and local governments to regulate vehicles and set safety standards. In 2016, the NHTSA published the first policy guidance on automated vehicles to offer uniform safety regulations and discuss relevant regulatory tools.[146] Industry experts have been able to weigh in on the effort to regulate autonomous vehicles. For example, the Society of Automotive Engineers created a six-level scale that the NHTSA adopted. The scale classifies autonomy from zero to five, with zero being entirely human-controlled and five requiring no human input.[147] The degree of vehicle autonomy will be a relevant factor in future regulation, and will likely be an important consideration in the healthcare context as well.
Federal oversight of autonomous vehicles has been of considerable interest to Congress.[148] While there are currently no specific regulations at the federal level, since 2012 at least 41 states have considered legislation related to autonomous vehicles and 29 states have enacted such legislation.[149] Various jurisdictions have considered different approaches to assigning liability for autonomous vehicles.[150] One solution is the creation of a compulsory no-fault quasi-insurance system in which the victim of a self-driving car accident receives a payment without the need to prove why the vehicle caused the harm.[151] Such a system would be funded by a surtax on autonomous vehicles.[152] The autonomous vehicle experience therefore suggests that regulation of healthcare AI will likely be addressed at the state level, following guidance from federal regulatory agencies.
Conclusion
As AI systems continue to develop, improve, and become more ubiquitous, driven perhaps by consumer demand or utilization of even more sophisticated tools, the relationships among physicians, technology, and consumers need to evolve as well. The allocation of responsibility within, and the way we regulate, this evolving continuum of healthcare is not clear given the profound capabilities of AI systems that are realistically imaginable and, in some instances, just around the corner.[153] While healthcare AI raises some unique risk and allocation issues, there is no reason to believe current products liability doctrine cannot adapt to allow courts, industry, and consumers to resolve and manage disputes in this evolving area.
*This article is adapted from the ABA Health Law Section’s new book, Bringing Medical Devices to Market, edited by Charlene Cho. This book guides readers through the process of bringing a new medical device from proof of concept to the market, covering topics such as regulation, clinical trials, intellectual property protections, and coding/reimbursement. For more information, go to www.shopABA.org.