August 20, 2019

AI Product Liability Issues and Associated Risk Management

Stephen S. Wu

The following is excerpted from Chapter 16 of Law of Artificial Intelligence and Smart Machines: Understanding A.I. and the Legal Impact.

Robots, and autonomous vehicles (AVs) in particular, act in the physical world.  Accidents involving these systems are inevitable.  Some of these accidents will cause catastrophic injury to those involved.  Worse still, if a defect or cyber attack compromised every instance of a particular robot, or an entire network, fleet, or industry, it could cause widespread simultaneous accidents across the country or even the world.  Imagine, for instance, a future in which regional transportation centers in metropolitan areas control the dispatch and navigation of AVs in the region.  Imagine further that a sudden defect causes all the AVs under the system’s control to crash at once in a major metropolitan area like New York.  The impact of such an event in terms of property damage, injuries, and deaths could easily exceed that of the attacks of September 11, 2001.

In 2012, I had the opportunity to speak at the Driverless Car Summit presented by the Association for Unmanned Vehicle Systems International.  The conference organizers polled the audience; although admittedly unscientific, the poll did provide a data point about industry views on product liability.  One polling question asked attendees to identify the chief obstacle to the deployment of AVs, and the top answer was “legal issues.”  The proceedings of the conference identified this issue as well.[1]  Although the poll did not break down the issue between compliance and liability, I suspect that liability is the larger perceived concern.  Indeed, some have identified product liability suits as an existential threat to autonomous driving.[2]

In the worst-case scenario for the industry, manufacturers could face numerous suits that force some of them to exit the robotics market and cause others to decide not to enter the market in the first place.  They could perceive that the sales are not worth the risk.  Such an outcome could be tragic if it results in manufacturers not bringing otherwise life-saving and socially beneficial robots to the market.  Manufacturers, however, can implement practices to minimize the likelihood, frequency, and magnitude of accidents, and thereby control the risk of liability.  By implementing these practices, manufacturers can maintain the profitability they would need to offer robots in the market.

Managing the Risk of Robot Product Liability

Given the large human and financial consequences of defective products, manufacturers seek to manage the risk of product liability litigation and costly recalls.  What can a robot, AV, or AI system manufacturer do to reduce the likelihood of company-ending product liability litigation?  Most importantly, if manufacturers can proactively prevent defects and resulting accidents from occurring in the first place, they can prevent the need to defend product liability claims.  Planning for improved safety can enable manufacturers to make safer products that are less likely to cause accidents and trigger product suits.

Of course, accidents may occur anyway; with any widely deployed robot, AV, or AI system, a manufacturer can foresee that accidents are inevitable.  Nonetheless, a proactive approach to risk management permits a manufacturer to put itself in the best position possible to prevail in product liability cases arising from those inevitable accidents.  A proactive approach to design safety means that the manufacturer takes steps today to implement a commitment to safety, minimizing its risk from future suits.  History shows that juror anger fuels outsize verdicts.  If a proactive manufacturer takes concrete and effective steps to implement a commitment to safety, it will be able to tell a future jury why its products were safe and how it truly cared about safety.  Such actions place the manufacturer in the best possible light when, despite all these safety measures, an accident does occur.

Making the commitment to safety upfront is crucial.  As one commentator stated, “The most effective way for [counsel for] a corporate defendant to reduce anger toward his or her client is to show all the ways that the client went beyond what was required by the law or industry practice.”[3]  Going beyond minimum standards is important because, first, juries may look at minimum standards skeptically, thinking that the industry set the bar too low.  Second, juries expect that manufacturers know more about their products than any ordinary “reasonable person,” the standard for judging a defendant in a negligence action; juries simply expect more from manufacturers.  “A successful defense can also be supported by walking jurors through the relevant manufacturing or decision-making process, showing all of the testing, checking, and follow-up actions that were included.  Jurors who have no familiarity with complex business processes are often impressed with all of the thought that went into the process and all of the precautions that were taken.”[4]  The most important thing to a jury is that the manufacturer tried hard to do the right thing.[5]  Accordingly, a manufacturer that goes above and beyond minimum industry standards is in the best position to minimize the likelihood of juror anger and the resulting product liability risk.

Any proactive approach to product safety should begin with a thorough risk analysis.  A risk analysis examines the types of problems that could arise with a product, how likely those problems are to occur, and their likely frequency and impact.  After completing this analysis, a manufacturer can evaluate its robot or AI product design in light of the risks.  It can change design and engineering practices to address potential issues and prioritize risk mitigation measures based on what it sees as the most significant risks.  In implementing this risk management process, a manufacturer may obtain guidance from a number of standards relevant to robots and AI systems.  In the field of AVs, examples include:

  • ISO 31000 “Risk management – Guidelines” (regarding the risk management process).
  • Software development guidelines from the Motor Industry Software Reliability Association.
  • IEC 61508 Functional safety of electrical/electronic/programmable electronic safety-related systems (safety standard for electronic systems and software).
  • ISO 26262 family of “Functional Safety” standards implementing IEC 61508 for the functional safety of electronic systems and software for autos.
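The qualitative steps described above, enumerating failure modes, rating their likelihood and impact, and prioritizing mitigations, are often captured in a simple risk register of the kind ISO 31000-style programs use.  The sketch below is illustrative only; the failure modes, ratings, and severity thresholds are hypothetical and would be set by a manufacturer's own engineering and safety teams.

```python
# Illustrative risk-register sketch (hypothetical failure modes and ratings).
# Each hazard gets a 1-5 likelihood and a 1-5 impact rating; their product
# (the risk score) drives mitigation priority.
from dataclasses import dataclass


@dataclass
class Hazard:
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def rank(hazards):
    """Return hazards sorted from highest to lowest risk score."""
    return sorted(hazards, key=lambda h: h.score, reverse=True)


def level(score: int) -> str:
    """Map a score onto coarse severity bands (thresholds are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


register = [
    Hazard("sensor misclassification in heavy rain", likelihood=4, impact=4),
    Hazard("fleet-wide software update defect", likelihood=2, impact=5),
    Hazard("cabin display glitch", likelihood=3, impact=1),
]

for h in rank(register):
    print(f"{h.name}: score {h.score} ({level(h.score)})")
```

The point of such a register is not the arithmetic, which is trivial, but the discipline of recording each hazard, its rationale, and its mitigation contemporaneously, so the analysis can later evidence the manufacturer's safety process.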

Adherence to international standards may not insulate a manufacturer from liability, whether in front of a jury or as a matter of law.  Nonetheless, following international standards increases the credibility of a manufacturer’s risk management program.  Also, following standards helps a manufacturer create a framework of controls for its risk management process.  Such a framework would make implementation and assessment easier.  Therefore, organizing a risk management program based on the methods specified in international standards provides an important basis for defending later product liability litigation.

In addition to adhering to international standards, insurance will play an important role in managing robot and AI product liability risk.  Insurance shifts product liability risk to insurance carriers.  In exchange for a premium, a manufacturer’s carriers will defend and indemnify it for losses and pay settlements or judgments to resolve third party claims.  The insurance industry is in the early stages of understanding robot and AI risk and creating coverage that effectively manages it.[6]  As businesses and consumers deploy robots and AI systems more broadly, insurers will create insurance programs for third party accident and liability risks.  Some of those risks may include privacy and security breaches.  One barrier to effective insurance programs is the lack of loss experience data to assist in the underwriting process.  To start writing policies for given robots or AI systems, however, carriers are likely to look at analogous conventional products.[7]  In the short run, manufacturers may need to negotiate bespoke policies tailored to their risk profiles.  Over time, more carriers will enter the market and create standard policies, reducing premium costs over the longer run.

Beyond the most immediate internal safe design steps and insurance programs, manufacturers of a given type of robot or AI system may be able to act jointly to mitigate risk to the entire industry sector (subject to possible antitrust issues involving joint action).  For instance, they may work on safety and information security standards to promote safe practices within the industry sector.  Trade groups and purchasing consortia can help manufacturers promote safety among component manufacturers.  Finally, an industry sector may want to create and maintain information sharing groups to develop and promote safety practices among industry participants.

During the design process, effective records and information management (RIM) will help a manufacturer document and evidence its commitment to safety.  Documents generated contemporaneously with the design process can memorialize a manufacturer’s safety program and the steps it takes to fulfill its commitment to safety.  In any product liability suit, a witness could certainly testify about the manufacturer’s safety program.  Nonetheless, without corroborating contemporaneously recorded documentation, there is a risk that the jury would find any such testimony to be self-serving and thus disbelieve it.  In this vein, wholesale destruction of all design documents of a certain age may be as bad as retaining too many documents.  Archiving the right documents in preparation for future litigation will help the business defend itself in the future.  Effective RIM may win cases, while poor RIM may lose cases.

Finally, some pre-litigation strategies may further reduce product liability risks.  For example, manufacturers can work with jury consultants to advise the manufacturer in the defense of a product liability case.  They can focus on ways the manufacturer can place its safety program in the best light to avoid impressions that would anger a jury.  Moreover, a manufacturer may want to create a network of defense experts familiar with their robotics or AI technologies.  These experts can help educate jurors about various engineering, information technology, and safety considerations.  Further, attorneys representing AI and robotics manufacturers may work within existing bar groups or form new ones to share specialized knowledge, sample briefs, case developments, and other information helpful to the defense of product liability cases.

[1] E.g., Autonomous Solutions Inc., 5 Key Takeaways From AUVSI’s Driverless Car Summit 2012 (Jul. 12, 2012) (“Some of the largest obstacles to autonomous consumer vehicles are the legalities.”).  Reports from Lloyd’s of London and the University of Texas listed product liability as among the top obstacles for AVs.  Lloyd’s, Autonomous Vehicles Handing Over Control:  Opportunities and Risks for Insurance 8 (2014) [hereinafter, “Lloyd’s Paper”]; University of Texas, Autonomous Vehicles in Texas 5 (2014).

[2] See, e.g., Tim Worstall, When Should Your Driverless Car From Google Be Allowed To Kill You?, Forbes, Jun. 18, 2014 (“the worst outcome would be that said liability isn’t sorted out so that we never do get the mass manufacturing and adoption of driverless cars”).

[3] Robert D. Minick & Dorothy K. Kagehiro, Understanding Juror Emotions:  Anger Management in the Courtroom, For the Defense, July 2004, at 2 (emphasis added).

[4] Id.

[5] See id.

[6] See generally Lloyd’s Paper, supra note 1.

[7] Cf. David Beyer et al., Risk Product Liability Trends, Triggers, and Insurance in Commercial Aerial Robots 20 (Apr. 5, 2014) (describing the development of insurance coverage for drones), available at

Stephen S. Wu

Shareholder, Silicon Valley Law Group

Stephen Wu is a shareholder with Silicon Valley Law Group.  He advises clients concerning cutting edge information technologies, such as artificial intelligence, automated vehicles, robotics, mobile computing, cloud computing, Big Data, human-computer interfaces, and the Internet of Things.  He helps clients with transactions, compliance, liability, investigations, and information governance regarding these technologies.  His work includes establishing data security, privacy, and records management policies and programs.  Steve counsels clients on compliance with the European Union’s General Data Protection Regulation and the California Consumer Privacy Act.  He assists with litigation in technology, trade secret, and copyright cases.  Finally, Steve acts as outside general counsel to Silicon Valley startups and technology companies, drafting and negotiating software and AI as a service agreements, licenses, HIPAA business associate agreements, marketing agreements, and other technology transactions.