August 20, 2019

Navigating Liability Complaints Related to Automated Vehicles

Addressing design complaints about vehicle automation requires a robust assessment of the underlying scientific evidence.

By John L. Campbell

Driver-assistance systems have evolved from aiding the driver in very specific situations (e.g., electronic stability control) to supporting a broader range of driving tasks (e.g., adaptive cruise control and lane-keeping systems) and even to conditional automation (e.g., driverless shuttles and sustained highway driving). While recent demonstrations of automated vehicles (AVs) support the possibility that even conditional automation can provide safety benefits compared to fully manual driving, many driver behavior challenges remain. For example, while automated vehicle designers may assume that drivers will trust and understand these vehicles and perform their allocated monitoring roles, recent on-road incidents involving fatalities (in Florida, Arizona, and California) have undercut such assumptions and remind us that AVs can change the nature of driving in unanticipated and complex ways.

Surveys on driver acceptance of AVs conducted in the wake of these incidents suggest that the public’s tolerance for crashes and fatalities in AVs is much lower than that for manual driving (see, e.g., AAA, 2018). This intolerance highlights the importance of preparing for the inevitable litigation that will follow the broader implementation of AVs on our streets and highways.

Against this backdrop, it is important to examine not only driver behavior challenges relating to AVs that might be the basis for products liability lawsuits but also strategies to use when reviewing scientific research to respond to those claims.

Human Factors: Challenges Associated with AVs

If you litigate automotive claims involving human factors or driver behavior, you are likely familiar with topics such as perception-reaction time, design of warning information, driver distraction, and the effects of aging on driver performance. While such topics will remain important as long as drivers are behind the wheel, they may be overshadowed by an emerging set of human factors and driver behavior challenges introduced by vehicle automation. Central to these challenges is that partial automation (i.e., anything short of a Level 5 driving automation system that can independently drive the vehicle under all conditions) is imperfect and can act and fail in unpredictable ways and at unpredictable times. Partial automation can lead to expectation mismatches that result in drivers undertrusting or overtrusting the system, misunderstanding how the automation operates, or not knowing “who’s in charge” at a given point in time. Such mismatches can lead to driver errors and contribute to crashes. These new challenges are summarized below.

Human Factors: Trust in Automation

A driver’s confidence in the vehicle’s ability to perform all or part of the driving task, and the driver’s willingness to rely on information provided by the AV, are crucial to vehicle performance and safety.

However, drivers have been cautious in their acceptance of vehicle automation, and some research suggests that undertrust is dampening consumers’ interest in buying and using AVs. For example, the American Automobile Association has conducted yearly surveys on consumer acceptance of AVs for the past five years, and the percentage of drivers who report being afraid to ride in a fully self-driving vehicle has never fallen below 60 percent. Similarly, the Insurance Institute for Highway Safety found that lane departure warning (LDW) systems had been turned off in more than 67 percent of the vehicles it examined. LDW systems have clear potential for reducing crashes, but if drivers turn them off, those safety benefits will not be realized.

Overtrust may be an even greater concern: too much trust in the system can lead to poor oversight of the vehicle’s functions. Research has shown that even though drivers are supposed to be monitoring the road, they show a greater willingness during automated driving to engage in secondary tasks such as reading, eating, watching movies, texting, and even sleeping (see, e.g., articles by Jamson et al. (2013) and Winter et al. (2014)). Even brief exposure to automation has been associated with an increased willingness to neglect safe driving practices. For example, a study conducted by the University of Iowa found that 30 percent of drivers with a blind spot monitoring (BSM) system reported at least sometimes changing lanes without visually checking their blind spot. In the same study, 25 percent of drivers with a rear alert system reported at least sometimes backing up without looking over their shoulder. Overtrust can thus lead to acts of both commission and omission that reduce safety.

Human Factors: Understanding AVs

Understanding in this context refers to a driver’s knowledge of the AV’s purpose and actions: what the vehicle can and cannot do, how it works, and how it is likely to behave in the future.

Understanding complex technology like AVs is crucial to successful driving performance. When drivers understand a system, it improves their performance with the system, their trust in the system, and their ability to identify errors or problems while the system is operating. As noted in design guidelines for AVs developed by the National Highway Traffic Safety Administration (NHTSA), success for AVs requires that drivers develop and maintain a functionally accurate understanding of how the system operates. Historically, drivers have misunderstood new automotive technologies (e.g., anti-lock brakes) when first introduced, and often for some time afterward.

Understanding the capabilities and limitations of driver-assistance technologies has nonetheless proven to be a challenge for some drivers. In the same University of Iowa study cited above, researchers found that only 21 percent of owners correctly identified a BSM system’s inability to detect vehicles passing at very high speeds, and that 33 percent of owners of vehicles with automatic emergency braking (AEB) systems did not realize that the system relied on cameras or sensors that could be blocked by dirt, ice, or snow.

The challenge for AV developers is that if drivers do not understand the technology, they may either not use the technology or—worse—misuse it. Misunderstanding an AV’s capabilities can have real-world consequences. For example, a lack of understanding of the vehicle’s capabilities and limitations was a contributing factor in a crash between a car operating with an automated vehicle control system and a tractor-semitrailer in Florida in 2016.

Human Factors: Driver Engagement

As the driving automation system provides greater levels of driving functionality (e.g., providing both steering and braking capability for extended periods of time, such as the Tesla Autopilot feature), driver behavior issues become more significant because of the ways in which these systems change the roles and responsibilities of the driver.

Higher levels of vehicle automation can change the driver’s role from being an active operator of the vehicle to being a passive supervisor. This provides potential benefits: driver workload can decrease, and drivers can use the time to make phone calls or engage passengers in conversations.

However, higher levels of vehicle automation may have costs as well: when drivers are removed from the active control loop, they can neglect their monitoring responsibilities, disengage from the driving task, stop looking at the roadway, or engage in other activities. A driver who loses situation awareness may not be prepared to intervene and provide a timely and effective response to a takeover request or to an emerging roadway hazard. A driver’s willingness to engage in distracting secondary tasks and neglect roadway-monitoring responsibilities was a contributing factor in a crash between a car operating with an automated vehicle control system and a pedestrian in Arizona in 2018.

Overall, these emerging human factors challenges are real and could be implicated—directly or indirectly—as contributing factors in individual crashes involving AVs.

Assessing the Scientific Evidence to Support Litigation

Crashes involving AVs could result in liability claims that allege design defects or a failure to warn and could include crash-causation theories that reflect driver behavior issues and human factors issues. In response, human factors experts may be asked to review the information obtained during discovery, conduct integrative reviews of relevant scientific research, and provide opinions regarding the design of the AV and ways that driver perceptions, inattention, expectations, and behaviors might have contributed to the crash.

Data sources for integrative reviews include journal articles, books, handbooks, design standards, industry reports, past literature reviews, and government documents. The greatest advantage of integrative reviews over primary research is that a multisource review can provide information not available in any single data source by capitalizing on study-level variation and aggregating results across studies. Such reviews can assess whether the AV involved in a crash was designed in accordance with available criteria and whether the driver’s behaviors were reasonable relative to those of a typical driver.
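
To make the aggregation step concrete, the short Python sketch below shows one common way of pooling results across studies: a fixed-effect, inverse-variance weighted average of study-level effect sizes. The study names and numbers are hypothetical, and a real integrative review involves far more expert judgment than this arithmetic suggests.

```python
import math

# Hypothetical study-level results: each entry holds an observed effect size
# (e.g., a reduction in takeover reaction time, in seconds) and its variance.
studies = [
    {"name": "Simulator study A", "effect": 0.40, "variance": 0.04},
    {"name": "Test-track study B", "effect": 0.25, "variance": 0.09},
    {"name": "Field study C", "effect": 0.55, "variance": 0.16},
]

# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so more precise studies contribute more to the pooled estimate.
weights = [1.0 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))  # standard error of pooled estimate

print(f"Pooled effect: {pooled:.2f} (SE = {pooled_se:.2f})")
```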

However, because AVs are a relatively recent development, such claims can be difficult to address because (1) we do not yet have a body of well-accepted scientific research on many driver behavior issues related to AVs, (2) few human factors design topics have been codified as standards or even best practices within the industry, and (3) the broader state of AV research methods is still somewhat immature. For example, research may not accurately reflect the driver’s new role, may inappropriately compare manual and automated driving, may incorporate inadequate testing methods, or may utilize outdated implementations of AV functionality.

Despite these uncertainties, it is possible to use existing research to develop effective analyses and compelling opinions that assess system design and evaluate driver behavior in AVs. In this emerging technical area, it is essential to critically assess specific attributes of individual research studies, such as applicability, quality, and credibility. While assessing these attributes necessarily involves expert judgment, some general heuristics that may be helpful to those involved in AV litigation are summarized below.

Applicability: Is the Research Relevant?

Applicability refers to the extent to which the results and conclusions provided by a research study can be generalized to the specific set of crash circumstances involved in your litigation. In the context of assessing original research, this often refers to external validity, i.e., the degree to which research findings can be applied to other situations and people. This includes the representativeness of the test stimuli, test environment, research tasks, and experimental subjects themselves. A key issue to consider is the intended application area. That is, how and where did the authors or sponsors of the original study intend for the findings to be applied?

Because there is a paucity of directly relevant research on driver behavior in AVs, human factors issues are often discussed within the larger context of automation research conducted in comparable domains, such as aviation and military applications. Many of the general concerns about AVs expressed in the popular press and the scientific literature originate from “lessons learned” from these other domains. While these general concerns are often perfectly reasonable, you should consider their applicability to the case at hand. For example, there are many differences between the motivation, training, and physical capabilities of military pilots and the general population of automobile drivers. Could the results of a particular study that highlights some shortcomings of automation be limited to the specific characteristics of the participants in that study? Can the results of research involving military personnel be reasonably generalized to the participants in the crash under investigation? In general, are findings from comparable domains relevant and applicable to your litigation?

Quality: Is the Research Rigorous and Valid?

Quality refers to the judged rigor and validity of a data source. Assessing the quality of data sources should reflect standard methodologies for reviewing and evaluating research in the social sciences (see, e.g., books by Cooper (1989), Light and Pillemer (1984), and Lipsey (1990)).

Specific experimental design characteristics of research that should be examined closely include the following (a simple documentation sketch appears after the list):

  • Independent variables (e.g., type of automation, implementation of secondary task, level of training, driver age and/or gender) and the rationale for selecting them
  • Dependent measures used in the research (e.g., reaction time, accuracy rates, error rates, driving performance, driver acceptance preference ratings, and workload metrics)
  • The general setting of the study (e.g., laboratory studies, simulation studies, in-vehicle/field studies, and test track studies) and its appropriateness for evaluating the underlying research questions
  • Data-analysis techniques
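
One way to operationalize the review of these characteristics is a structured record for each data source, as in the hypothetical Python sketch below. The fields simply mirror the checklist above; the class and the example entry are illustrative, not a standard review instrument.

```python
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    """Hypothetical checklist for documenting a study's design characteristics."""
    citation: str
    independent_variables: list[str]  # e.g., type of automation, level of training
    dependent_measures: list[str]     # e.g., reaction time, error rates, workload
    setting: str                      # e.g., "simulator", "test track", "field"
    analysis_methods: list[str]       # e.g., "ANOVA", "mixed-effects regression"
    validity_threats: list[str] = field(default_factory=list)  # reviewer's notes

# Example entry for a fictional simulator study under review.
record = StudyRecord(
    citation="Hypothetical et al. (2018)",
    independent_variables=["automation level", "secondary-task type"],
    dependent_measures=["takeover reaction time", "lane-keeping error"],
    setting="simulator",
    analysis_methods=["mixed-effects regression"],
    validity_threats=["small sample", "outdated automation implementation"],
)
print(record)
```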

As noted above, driver behavior research for AVs very much reflects the emerging nature of the technology itself and may be poorly conceptualized or executed. This lack of maturity within the field suggests that the quality of research being considered for inclusion in an expert opinion be examined carefully. Quality assessments can include documenting objective characteristics of a data source to assess experimental methods and identifying possible threats to validity.

Credibility: Is the Source of the Research Trustworthy?

As used here, credibility refers to the standing or level of authority associated with the source of the research. Although credibility may not be as crucial to evaluating scientific research as quality or applicability, it should be considered as part of a broader assessment of an individual data source. In general, higher levels of credibility are associated with research and AV design guidance produced by established, authoritative sources, such as government agencies, standards organizations, and peer-reviewed publications.

Related to credibility is consistency—that is, the degree to which the empirical results or conclusions presented in a data source agree with those specified in other, comparable sources or with established theoretical models. When used as an evaluation criterion, consistency must be applied carefully. Research studies that vary even slightly in their tasks, stimuli, variables, subject demographics, etc., will often yield different results. In general, however, data sources that are very similar in their purposes and methods should report findings or provide recommendations that are relatively consistent.
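
As a final illustration, the Python sketch below combines ratings on these attributes into a single screening score for a candidate data source. The attributes track the discussion above, but the 1-to-5 scale and the weights are hypothetical choices, not established practice.

```python
# Hypothetical screening rubric: each attribute is rated 1 (poor) to 5 (strong),
# and the weights below are illustrative choices, not established practice.
WEIGHTS = {"applicability": 0.4, "quality": 0.3, "credibility": 0.2, "consistency": 0.1}

def screening_score(ratings: dict) -> float:
    """Weighted average of attribute ratings for one candidate data source."""
    return sum(WEIGHTS[attr] * ratings[attr] for attr in WEIGHTS)

source = {"applicability": 4, "quality": 3, "credibility": 5, "consistency": 4}
print(f"Screening score: {screening_score(source):.1f} / 5")  # prints 3.9 / 5
```

Any such score is at most a triage aid for deciding which sources merit closer reading; it cannot substitute for the expert’s qualitative assessment of each study.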

Summary

For legal counsel representing automakers, technology suppliers, and others involved in implementing vehicle automation, the inevitable litigation associated with AVs will include a mix of “something old and something new.” Familiar human factors topics relevant to driving and crashes (e.g., perception-reaction time, warnings, and distraction) will still play a significant role. New, however, will be the complex challenges introduced by automation and the ways automation can change driver expectations: driver trust, driver understanding, and drivers’ vigilance and monitoring behaviors.

In the AV domain, there is a lack of codified standards and best practices to aid responses to litigation claims. This highlights the vital need for rigorous assessments of the applicability, quality, and credibility of data sources that can be used to respond to products liability claims involving driver behavior and human factors.

Despite the challenges, valuable insights and compelling opinions on the underlying causes of crashes involving AVs can be developed through human factors analyses of the crash. Such analyses will benefit from integrative reviews of the scientific literature that apply rigorous standards to the initial selection of data sources.

John L. Campbell is a senior managing scientist at Exponent in Seattle, Washington.


Copyright © 2019, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).