
Jurimetrics Journal

Jurimetrics: Fall 2023

The Awkward Middle for Automated Vehicles: Liability Attribution Rules When Humans and Computers Share Driving Responsibilities

William Widen and Philip Koopman

Summary

  • The proposed legal architecture uses four different modes of operation for a driving automation system.
  • Questions of liability attribution and allocation for driving automation systems need urgent legislative answers as manufacturers test and deploy on highways and roads, and incidents of driving automation system failures continue to pile up.
  • Most, if not all, state regulations that address automated vehicles are currently keyed to the “levels” in SAE J3016, which range from 0 to 5.
  • Capturing and preserving crash data will require a more nuanced approach than current event data recorder mechanisms.

Abstract: This Article proposes an architecture of concepts and language for use in a state statute that establishes when a human occupant of an automated vehicle (AV) has contributory negligence for her interactions with a driving automation system. Existing law provides an insufficient basis for addressing the question of liability because a driving automation system intentionally places some burden for safe operation of an AV on a human driver. Without further statutory guidance, leaving resolution to the courts will likely significantly delay legal certainty by creating inefficient and potentially inconsistent results across jurisdictions because of the technological complexity of the area. To provide legal certainty, the approach recommended uses four operational modes: testing, autonomous, supervisory, and conventional. Transition rules for transfer of responsibility from machine to human clarify at what times a computer driver or human driver has primary responsibility for avoiding or mitigating harm. Importantly, specifying clear parameters for a finding of contributory negligence prevents the complexity of machine/human interactions from creating an overbroad liability shield. Such a shield could deprive deserving plaintiffs of appropriate recoveries when a computer driver exhibits behavior that would be negligent if a human driver were to drive in a similar manner.

Citation: William H. Widen & Philip Koopman, The Awkward Middle for Automated Vehicles: Liability Attribution Rules When Humans and Computers Share Driving Responsibilities, 64 Jurimetrics J. 41–78 (2023).

This Article proposes an architecture of concepts and language for use in a state statute that establishes parameters to decide when a human occupant of an automated vehicle (AV) has contributory negligence for her interactions with a driving automation system. A law that clearly sets forth the behavior reasonably expected of a human occupant in her interactions with a driving automation system will reduce uncertainty of outcomes and promote judicial economy by setting boundaries on the determination of contributory negligence and comparative fault. The need to address this type of liability question is no secret and has lurked in the background of academic discussion for over a decade.

The question of when a human occupant has contributory negligence for interactions with a driving automation system arises in the larger context of when, and to what extent, a driving automation system itself has responsibility for accidents resulting from its behavior and when a legal person has responsibility for shortcomings in the performance of a “computer driver.” The proposed architecture for a statute assigns liability for deficient performance of a computer driver to its manufacturer to create a financial incentive for manufacturers to produce safe driving automation systems. Assigning responsibility to a “non-legal-person” driving automation system (which relies on artificial intelligence systems) and covering losses solely with insurance (whether purchased by a manufacturer or an owner) does not create the desired financial incentives to produce a safer product. An owner of a vehicle for personal use has no ability to improve the safety of a driving automation system, and even if a manufacturer were required to maintain insurance, incentives to improve safety would come only indirectly and incompletely through the payment of premiums.

The legal architecture proposed in this Article uses four different modes of operation for a driving automation system: (1) testing mode; (2) autonomous mode; (3) supervisory mode; and (4) conventional mode. The demarcation of driving modes is particularly important for answering questions about possible contributory negligence and for guiding comparative fault calculations. We reasonably expect different degrees of human oversight of, and intervention in, the operation of an AV depending on the design of the driving automation system and the mode in which the vehicle is operating at the time of—and in the time immediately preceding—any accident, collision, or other incident.

We believe that, in most cases, the certainty provided by a statute is preferable to leaving the courts to delineate the human occupant’s duties with respect to her interactions with a driving automation system because it would take courts a long time to develop appropriate parameters and the parameters may develop inconsistently in different jurisdictions. Though the approach provides the certainty and predictability of rules, it leaves room for application of a more flexible “standards” approach in appropriate circumstances.

As an example of potential inconsistency causing concern, General Motors admitted in a responsive pleading that its automated vehicles owed a duty of care to road users. Mercedes-Benz, on the other hand, has so far refused to accept a duty of care for its vehicle automation systems, instead urging reliance on existing laws governing design defects. The design defect approach is completely unworkable due to the lack of experts qualified to serve as witnesses and the difficulties of demonstrating causation in machine learning systems using neural networks.

For ease of reference, our statutory architecture uses the concept of a computer driver that we developed in a prior essay—Winning the Imitation Game. Our suggested driving modes might, however, be used independently of our recommendations in that essay.

A computer driver is “a set of computer hardware, software, sensor, and actuator equipment that is collectively capable of [s]teering a vehicle on a sustained basis without continual directional input from a [h]uman [d]river.” This formulation of the computer driver concept has a wider scope than the term automated driving system (ADS) defined by SAE J3016. The term ADS is narrower because that term is limited in applicability to its defined Levels 3, 4, and 5. A Level 2 system could fall under the concept of computer driver if it had a feature that controlled steering on a sustained basis.

A category with a broader scope than the familiar levels found in SAE J3016 is needed for appropriate attribution of liability, given the risks posed by control of steering on a sustained basis, which occurs in both SAE Level 2 and Level 3 features. Moreover, as currently drafted, incorporation of the SAE J3016 levels into AV laws and regulations permits manufacturers to manipulate the definitions to avoid regulation.

In Winning the Imitation Game, we suggested that the law define the category of computer driver and provide for the possibility of a negligent computer driver. Under this proposal, the manufacturer would have liability if the computer driver did not mimic or exceed the ability to mitigate or avoid harm to road users that the law demands of human drivers in any given situation. The law should provide a clear avenue for pursuit of a negligence claim against a negligent computer driver (without a claim for defective product design) because, among other reasons, proof of a product liability claim is complex and may be hampered by difficulty in getting access to technical information such as source code. In discovery, the plaintiff generally has the burden to demonstrate a need to inspect source code. There is no presumption of access based on the inscrutable “black box” nature of software. Some federal district courts have default rules for handling disclosure of source code. However, a defendant may argue that the default rule for source code should not apply. Examination of expert witnesses who form opinions based on a review of source code will be subject to challenge under the factors described in Daubert v. Merrell Dow Pharm., Inc.

The questions of liability attribution and allocation for driving automation systems urgently need legislative answers because incidents of driving automation system failures continue to pile up as manufacturers test and deploy on our highways and roads. For example, Mercedes-Benz planned deployment of Level 3 vehicles in Nevada in late 2023, but that deployment appears to have been delayed. Cruise and Waymo obtained permission to expand testing of Level 4 robotaxis from San Francisco to operate throughout the state of California, but shortly thereafter Cruise had its approval withdrawn following a serious injury to a pedestrian. Arizona has an extensive program for testing vehicle automation technology and was the location of the first fatality during testing.

The need for new legislation should surprise no one because the law often requires a statutory fix to address changes in technology for which existing law understandably fails to provide a clear answer. Legal uncertainty inheres in any exercise trying to predict how courts will apply existing tort principles and rules to emerging and advanced technologies such as driving automation systems.

This Article proceeds by first providing a graphical introduction to our four driving modes. It then explains how these modes integrate into existing law and why they are needed, giving many examples of accident scenarios where the modes help a court produce a just result in a cost-effective way.

I. Outline of the Driving Modes

Briefly, the four driving modes are as follows:

• Testing: A human test driver oversees test vehicle safety.
• Autonomous: There is no human driver involvement required to operate the vehicle.
• Supervisory: A human driver oversees a computer that exerts sustained control over vehicle motion.
• Conventional: A human driver is primarily responsible for at least sustained vehicle steering.

Figure 1 provides a high-level overview of these modes. Note that sustained control of steering is used as a practical and simplifying decision threshold to differentiate whether the computer driver is engaged at any given time. As a practical matter, the computer driver will likely also concurrently control other aspects of vehicle motion, such as speed control (i.e., acceleration and braking). As a general rule, the computer driver has liability both during testing mode operation and in all other cases when it is engaged. Liability extends for a short time period after disengagement to allow for a proper human driver takeover of driving responsibilities and to account for potential automation complacency. Extending liability for this short time period provides an incentive for manufacturers to include effective driver monitoring features and to include appropriate driving behaviors in case a human driver ignores a request to take over control.

 

Figure 1. Automated Vehicle Operational Modes


Testing mode:

  • Prototype Computer Driver does the driving;
  • Human Driver mitigates dangerous behavior to the degree practical

Autonomous mode:

  • Computer Driver does the driving;
  • No Human Driver required

Supervisory mode:

  • Computer Driver steers;
  • Human Driver intervenes if necessary due to Computer Driver limitations;
  • Vehicles may permit hands-off driving

Conventional mode:

  • Human Driver steers;
  • Driver Assistance features might be active but do not provide sustained automated steering.
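
The default liability presumptions that follow from these modes can be summarized in a brief sketch. This is an illustration of the general rule described above (and is subject to the exceptions discussed below); the enum and function names are our own hypothetical labels, not statutory terms.

    from enum import Enum, auto

    class Mode(Enum):
        TESTING = auto()       # prototype computer driver drives; human test driver supervises
        AUTONOMOUS = auto()    # computer driver drives; no human driver required
        SUPERVISORY = auto()   # computer driver steers; human driver supervises and may intervene
        CONVENTIONAL = auto()  # human driver steers; no sustained automated steering

    def default_liability(mode: Mode) -> str:
        """Default presumption of responsibility for a mishap in each mode, before
        the exceptions discussed below (driver-monitoring alerts, takeover requests,
        malicious interventions, other road users, and the like)."""
        if mode is Mode.CONVENTIONAL:
            return "human driver"
        return "manufacturer (computer driver)"  # testing, autonomous, and supervisory modes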

A. Testing Mode

In testing mode, a J3018 safety driver also may have liability for dereliction of duty, as in the case of the Uber fatality in Tempe, Arizona. Safety driver fault, however, should not absolve a manufacturer of liability by using the human driver as a scapegoat. Using the human driver as a scapegoat creates a “moral crumple zone” that masks shortcomings in the technology.

B. Autonomous Mode

The computer driver in an AV operating in autonomous mode generally has liability because such systems by design have no expectation of a human driver intervening to mitigate risk. (If such an expectation were present, the vehicle would be operating in supervisory mode under our proposed structure.) As part of the value proposition of an autonomous vehicle, occupants would reasonably expect to be able to engage with entertainment media, devote their attention to a remote business meeting, take a nap, or read a book. Automated trucks and other autonomous cargo vehicles might not require any human present in the vehicle. Even for passenger vehicles, any people in the vehicle might not be qualified to drive due to age, physical condition, or lack of a required license.

C. Supervisory Mode

Supervisory mode is the most complex and can be thought of as a type of “collaborative driving” that encompasses a wide span of vehicle automation capabilities that involve automation of a substantial portion of the driving burden, including at least sustained vehicle steering.

Enabling a supervisory mode feature creates an awkward middle of shared driving responsibilities, which may vary over the course of an itinerary. An automation feature in this awkward middle requires that a human driver remain attentive to some defined degree based on the design concept of the feature. Additionally, the human driver is supposed to intervene to ensure safe vehicle operation when required to do so, according to some predefined set of expectations. The salient characteristic of such a system is that practical driving safety outcomes depend on a combination of computer driver behaviors and the potential for human driver intervention. Therefore, lacking a bright-line set of rules, the degree to which each might have contributed to a mishap can be unclear.

Moreover, different driving automation system designs might require different levels of human engagement for safe operation. One vehicle’s design concept might require that the human driver continually scan the road for hazards that the computer driver might have missed, leaving the computer driver to handle mundane lane-keeping and speed-control tasks. Another vehicle’s design concept might allow the human driver to watch a movie so long as she can respond to a vehicle takeover alarm within a reasonable time. But, across such diverse systems, the central characteristic remains: the computer driver and human driver both contribute to, and have some responsibility for, safety outcomes.

In supervisory mode, the human driver can have some responsibility for mishaps when she unreasonably ignores prompts to stay attentive or unreasonably fails to take over performance of the driving task in response to a request for takeover made by the computer driver. In some cases, a deficient or unsafe response to a computer driver request for an intervention may constitute human negligence. Human negligence may extend to cases in which a human driver fails to maintain sufficient attention to her surroundings during an itinerary. In other cases, a human occupant may intentionally take a malicious action that proximately causes an accident or collision for which a manufacturer ought not to have liability. In yet other cases, it might be unreasonable to expect a human driver, who has been encouraged by the manufacturer to take her eyes off the road, to intervene to avoid a crash if she is not notified that the computer driver is in trouble until the last second.

The law needs a response for all these cases. That response will be haphazard, inconsistent, and uncertain if allowed to develop over time through case law decisions in the traditional manner of common law development. Moreover, the common law process can take a long time to clearly articulate a new legal principle to address a novel situation—often taking years or decades to achieve consensus across jurisdictions.

D. Conventional Mode

When an AV is operated in conventional mode with the driving automation system disengaged, the human driver generally has liability (just as during operation of a conventional vehicle without a driving automation system). The computer driver’s liability may extend for a brief period beyond the disengagement of the driving automation system to allow a reasonable human driver to assume safe operation of the vehicle in a transition from autonomous or supervisory mode to conventional mode.

II. Essential Structural Difference When Humans and Computers Share Driving Responsibilities

A legislature must make two decisions for liability attribution when humans and computers share driving responsibilities. First, a rational legislature should have no difficulty agreeing that the law should provide some nonzero time interval after a computer driver initiates a takeover request during which the human driver does not have liability. This follows from indisputable limitations imposed by human nature.

Second, and more difficult, a legislature must determine the length of this nonzero interval. Our statutory architecture employs a two-part test to set it. To be conservative and, we hope, to achieve political consensus, the architecture proposes a short ten-second safe harbor interval during which no liability attaches. In selecting this safe harbor, we are guided by the minimum time set by European regulation for similar situations, which received extensive consideration (as discussed below). After the expiration of the safe harbor period, the statutory architecture leaves the determination of a reasonable interval to courts and juries based on a context-sensitive, facts-and-circumstances standards analysis.

We rely on a context-sensitive approach after expiration of the ten-second safe harbor in an attempt to reach consensus—allowing both plaintiffs and manufacturers to make their cases on familiar grounds found in tort law. We recognize that the length of the safe harbor will receive the most pushback from the AV industry. A legislature would, of course, be free to undertake empirical studies to set a longer safe harbor period, but we would not delay initial law reform to arrive at a more perfect number (which could be addressed by an easy technical amendment to an already agreed upon statutory framework).

Tort law applicable to conventional motor vehicle accidents has over time identified different categories of human-to-human interaction for which different liability analysis and factors are relevant. Consider an accident case type involving driver hand motions:

Vehicle A stops behind Vehicle B. The driver in Vehicle B makes a hand motion to the driver in Vehicle A indicating that it is safe to proceed. Vehicle A proceeds in response to the “all clear” signal from the driver in Vehicle B but is hit by an oncoming Vehicle C.

In an accident case type such as this, the law has developed generally applicable ground rules for allocating responsibility. The general rule is that the presence of a hand motion does not absolve the signaled motorist of her duty to use reasonable care in making highway maneuvers.

However, it remains a question of law whether the signaling driver can ever have contributory negligence for an accident by virtue of making the signal. A minority of jurisdictions hold that, as a matter of law, the signaling motorist has no duty of care when making the signal. The majority of jurisdictions take the opposite view, holding that under some circumstances, the driver who makes a gratuitous hand signal may have liability for a signal given negligently. Liability of a signaling driver in this accident type is context sensitive in those majority jurisdictions and depends on the details of the particular human-to-human interaction. Contributory negligence of a signaling motorist cannot be decided by reference to a generic accident type.

Introducing driving automation technologies complicates matters because computer drivers and human drivers can have shared responsibilities in which they take turns being responsible for safe operation of the vehicle. One must first determine whether the computer driver or human driver had responsibility for vehicle operation at the time of the incident. If the computer driver is engaged and performing steering on a sustained basis, in what circumstances can the human driver have contributory negligence for a failure to make an intervention? Are there some situation types in which, as a matter of law, the determination is not context sensitive?

We make the case below that there are certain situation types related to human reaction time in which a human driver should not have liability as a matter of law (as in the minority jurisdictions addressing the hand motion accident type), and others in which the determination of contributory negligence is context sensitive (as it is for most human-to-human interactions and as the majority jurisdictions treat the hand motion accident type).

The customary legal position taken by manufacturer-defendants is to find fault with the human driver for failing to avoid a crash in any accident involving a computer driver when a human driver is present. Using human drivers as scapegoats to shield manufacturers from liability for harm caused by an emerging technology is not simply unjust.

Shifting the cost of accidents onto consumers and the general public removes important incentives to improve safety. Until now, proponents of automotive companies could argue that shifting liability to a human driver was an effective business strategy because the under-theorized state of the law provided room to maneuver. Our approach provides a structure to remedy the situation with the least amount of disruption to existing legal doctrine and practice—an important step, as it is becoming increasingly clear that the status quo “blame the human” approach places many human drivers in untenable liability positions.

The advent of automation features that operate when the human driver is not continuously involved in the tactical driving task renders the strategy of blaming the human driver for all accidents unworkable. The legal system should not find fault with a human driver who takes advantage of the advertised benefits of driving automation to watch a movie on an in-vehicle infotainment screen (or engage in other activities) when a crash results from the dangerous behavior of her computer driver while she was not even looking at the road. There must be times at which the computer driver has a default presumption of responsibility, despite the presence of a human driver.

Additionally, even if the computer driver warns a human driver to start paying attention to the road or resume primary control of driving, the transfer of responsibility for safe driving does not occur at a discrete instant. Rather, the transfer of responsibility is a process that requires a minimum amount of time for responsible completion.

Liability during at least some initial duration of this transfer of control period should not be context sensitive because of the physical abilities and limits of human drivers: there is a minimum reasonable length of time that a human driver should have to react and assume control of the vehicle for safe operation without incurring liability for contributory negligence. The law should set a minimum lower temporal bound, after which there might be potential attribution of contributory negligence to the human driver. Responsibility for any accident, collision, or other incident that occurs at or within this minimum lower bound should not, as a matter of law, be attributed in whole or in part to the human driver. Above this minimum lower bound, a court may determine attribution and allocation of liability in the usual context-sensitive way, taking into account the reasonable time required for the transition from computer driver to human driver in that particular situation for that particular automated vehicle’s operational concept. Depending on the circumstances and a jury’s determination of reasonable human driver responses in each scenario, a human driver may have no contributory negligence, some contributory negligence, or full responsibility after the minimum lower bound has expired.

We suggest setting this transfer window during which a human driver has no liability for contributory negligence at ten seconds. This selection does not indicate that the human driver should always be found negligent if she takes longer than ten seconds to intervene to avoid an accident. Rather, it means that the human driver should never be found negligent if a crash happens less than ten seconds after the computer driver requests a transfer of control. Beyond that time, any finding of fault should be context dependent.

Once a human engages the computer driver, the computer driver has full responsibility for safe operation of the vehicle indefinitely. That responsibility might be transferred back to the human driver. However, in any such transfer back, the computer driver’s full responsibility continues during a blackout window of ten seconds during which the law may not assign contributory negligence to the human driver (absent a malicious intervention). After the expiration of the blackout window, the court determines contributory negligence just as it would in a conventional motor vehicle accident case. This may include a judicial determination that, based on the particular facts of the case, the human driver reasonably needed more than ten seconds to take over safe operation of the vehicle.
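
To make the proposed blackout-window rule concrete, the following sketch expresses it as a simple check. It is an illustration of our proposal only, not statutory text; the constant and function names are our own hypothetical labels.

    SAFE_HARBOR_SECONDS = 10.0  # proposed statutory minimum; a legislature could adopt a longer period

    def contributory_negligence_possible(seconds_since_takeover_request: float,
                                         malicious_intervention: bool = False) -> bool:
        """Sketch of the proposed blackout-window rule.

        Within the ten-second safe harbor after the computer driver requests a
        takeover, the human driver cannot be assigned contributory negligence
        (absent a malicious intervention). After the window expires, the question
        becomes context sensitive for the court or jury; returning True here only
        means the question may be asked, not that negligence exists.
        """
        if malicious_intervention:
            return True
        return seconds_since_takeover_request >= SAFE_HARBOR_SECONDS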

Every reasonable person would agree that some minimum lower bound is appropriate. Nobody can react within zero seconds to an imminent threat of harm; a response requires both time to notice that the computer driver is unable to handle a driving situation and time to physically intervene to regain physical vehicle control. So, it is not a question of if, but rather of how long a safe harbor should be allocated as a grace period before transferring responsibility to a human driver.

The issue for legislative decision is specification of the time-period threshold above which the minimum lower bound has been satisfied. We recommend a ten-second threshold as a conservative measure with which we expect no serious disagreement, for several reasons. First, this is the amount of time recommended by the ALKS standard in a low-speed situation for highway traffic jam pilot-type ADSs. It may also be reasonable to specify a longer statutory period in high-speed or other more complicated scenarios. This indicates that ten seconds is a reasonable minimum lower bound in every case as a starting point, pending further experience with the technology that might motivate more stringent requirements on computer drivers. Second, empirical data from actual crashes indicates that a fatal accident can occur within ten seconds after activation of an automated driving feature. Third, the well-known phenomenon of automation complacency confirms that it is completely unreasonable to expect an instantaneous transfer of responsibility for safe operation of a vehicle. Fourth, J3016 recognizes that an unspecified “several seconds” of speed reduction is appropriate to allow time for a “DDT fallback-ready user to resume operation of the vehicle in an orderly manner.”

We apply this same ten-second safe harbor to facilitate a transfer of control in two mutually compatible ways: the time required for a human driver to intervene in vehicle control when there is an evident need to do so and the time given to a human driver to cure a lapse in attention after an alarm from a driver monitoring system.

III. Complexity of Determination of Human Reaction Time Requires Statutory Intervention

The law needs to set reasonable expectations about minimum reaction times afforded to human drivers when operating in a situation which, by design, divides responsibility for driving between a human and a machine. The minimum reaction time should be a legal constant across different highway and road scenarios and across jurisdictions—and not context sensitive—because inherent limits to human response times are a feature of human nature, which is the same across all cases.

The science behind determination of reaction times is complex, with factors such as a person’s age significantly affecting individual response time capabilities. However, the law does not specify a shifting standard of negligence liability for ordinary torts depending on the specific abilities and reaction times of an individual defendant. Rather, tort liability is set by reference to a hypothetical reasonable man—an objective standard.

Similarly, the law can and should specify a uniform minimum grace period for human intervention during autonomous mode operation, because otherwise advertisements that offer to give drivers and occupants their time back are chimeras. No system will really “give back time” to anyone if the risk of contributory negligence lurks in the background with no grace period afforded to response time delays and no allowance for the intrinsically imperfect concentration that is the most that can be expected of human beings.

There needs to be a uniform minimum allowance for human drivers to shift modes from monitoring automation to driving the vehicle, and a requirement to manage driver attention in a reasonable way to mitigate the inevitable effects of automation complacency. A bare minimum safe harbor for human drivers should be codified by statute rather than haggled over in the courts in a likely inconsistent way over a period of many years, because reaction time and automation complacency are features of human nature common in all cases.

Proper attribution and allocation of fault to a plaintiff is important in negligence actions in all states because a defendant may assert ordinary contributory negligence as an affirmative defense to a negligence action. Proper attribution and allocation of fault to a plaintiff also can be important as an affirmative defense to a claim for strict products liability in many states.

IV. Automated Vehicle Designs That Rely on Human Intervention

Many AV designs contemplate that a human occupant in an AV may intervene to take over control of the vehicle in certain circumstances. Even if an automated vehicle’s design allows for a human driver to engage in other activities during a trip, the human driver may have the ability to either assume control of the vehicle or, at least, terminate the trip (bringing the vehicle to a stopped condition without undue risk). During a single itinerary, control of the vehicle may transfer from machine to human and back again multiple times, and human drivers might at times be told (or reasonably infer based on a manufacturer’s messaging) that they can take their eyes off the road or even take a nap.

Even when testing platforms require a trained, professional test driver’s continual supervision, it may be unreasonable to expect a human driver to ensure crash-free behavior in response to ADS system behaviors. As an example, in 2022 a heavy truck test platform hit a center barrier at highway speed, narrowly missing a collision with an otherwise uninvolved public road user’s vehicle in an adjacent lane. This occurred despite the test driver reacting quickly and apparently properly, though not fully effectively, in an attempt to counteract an unexpected and clearly unsafe sharp turn command executed by the computer driver at highway speed.

If the law provides that a computer driver may have liability for negligent driving (as we suggest in Winning the Imitation Game), it also needs to clearly set forth when and under what circumstances the failure of a human driver, other human occupant, or remote safety supervisor to respond appropriately to a request for intervention (either by failing to intervene or failing to perform a reasonable intervention) will constitute negligence. The proposed legal architecture uses the different operating modes to determine contributory negligence of a human driver or other natural person who might prevent or lessen the severity of an accident.

These rules apply both (1) when a computer driver’s advertised design concept requires the human driver to pay constant attention to road conditions despite driving automation features that allow sustained computer control of steering (as required by Tesla owner’s manuals) and (2) when a computer driver’s design concept permits a human driver to engage with other tasks while driving automation features are active (such as Mercedes-Benz Drive Pilot—which is advertised as an SAE Level 3 feature but which might, in fact, be a Level 2 feature). One benefit of using our suggested operating modes is that liability does not depend on a Level 2 versus Level 3 classification or determination. Rather, liability attaches based on the risks posed by an automation feature. We explain the structure of the rules below.

V. Liability Attribution in the Different Operating Modes

A. Testing Mode

In general, the proposed liability attribution rules provide that the AV manufacturer is responsible for losses from accidents, collisions, and other loss events when a vehicle is operating in testing mode (subject to limited exceptions), regardless of whether the human test driver or the computer driver is steering the vehicle on a sustained basis. Placing this liability on the manufacturer prevents unjust enrichment by allocating a cost to the permission granted by the state to the manufacturer to use public highways and roads for testing, which otherwise has no substantive cost. The fair cost allocation requires the manufacturer to pay for accidents proximately caused by its testing activities.

The rules also provide common-sense exceptions to liability if the negligent or malicious actions of another motorist or other road user proximately cause an accident or collision with the automated vehicle. While a test driver may independently have liability for failure to properly perform the duties of a test driver, a finding of test driver liability for failure to provide supervision to prevent loss does not relieve the manufacturer of liability. In testing mode, the manufacturer assumes responsibility for the actions of its employee or agent test drivers and should not have available the defense that the test driver was on a frolic and detour or otherwise operated outside the scope of her authority. The statutory architecture contemplates that to obtain a testing permit and conduct testing in compliance with law, the manufacturer must only use test drivers who are its employees or contracted agents. As a supplement, a state may require that the manufacturer test in compliance with the SAE J3018 test driver safety standard and implement a best-practice safety management system.

B. Autonomous Mode

The rules provide that the manufacturer is responsible for losses from accidents and collisions when the computer driver is operating negligently in autonomous mode. The manufacturer is the responsible party because, for vehicles operating in autonomous mode, a human occupant need not pay attention to the road or remain prepared to take over control of the vehicle. Indeed, manufacturers intend for autonomous mode operation to provide several benefits. For example, one benefit is that it allows the occupant to sleep during the itinerary; another benefit is that it improves access to transportation for people who are themselves unqualified to operate vehicles.

The rules provide exceptions to manufacturer liability if negligent, reckless, or malicious actions of another motorist or other road user proximately cause an accident or collision with the computer driver’s vehicle. The rules also provide that a vehicle occupant may have liability for a malicious intervention during autonomous mode operation. A malicious intervention that proximately causes an accident or collision can also eliminate liability for the manufacturer.

During autonomous mode operation, the occupant of an automated vehicle, if there is one, has no duty to pay attention to the road or to honor a request for an intervention to take control of the vehicle. Interventions by a human driver or occupant are permissive and not mandatory. No human driver or occupant can have contributory negligence for inattention or failure to intervene.

Moreover, no occupant has liability for a reasonable permissive intervention undertaken in response to a request to intervene or in response to a perceived system failure or exigent circumstance. If the computer driver places a human occupant in an exigent circumstance or dangerous situation, the human occupant should not be at fault for any attempt to prevent injury or death. We expect human drivers and occupants to act in accordance with survival instincts (which a computer driver does not possess) when the AV operating in autonomous mode fails to keep the occupants out of harm’s way and an occupant notices the imminent danger. Interventions might include emergency stops (e.g., to avoid entering flood waters or keep-out yellow warning tape areas marking a road hazard not detected by the computer driver), or emergency motions (e.g., to clear railroad tracks if the computer driver stops on a railway grade crossing). Whether an intervention turned out to be necessary to ensure safety in hindsight should not be relevant to the analysis. It should be sufficient if, at the time the intervention was made, the concern for safety prompting the intervention was reasonable and the intervention was performed in good faith.

C. Supervisory Mode

When an AV is operating in supervisory mode, the liability attribution rules generally place negligence liability for losses on the human driver or the computer driver depending on which driver is controlling steering on a sustained basis prior to a mishap, and under what circumstances. Subject to limited exceptions, the manufacturer has liability for losses from accidents and collisions occurring while the computer driver is engaged and operating negligently in supervisory mode; that liability is further subject to four limitations.

Limitation 1 is that the human driver has contributory negligence liability for failing to regain attention and assume control following a timely and reasonably effective driver monitor alert. The amount and type of attention and alerts will depend on the specifics of the AV’s operational concept. However, the presumption is that if the computer driver issues a driver monitoring attention alert, the human driver must respond by restoring attentive behavior to avoid incurring negligence liability for any accident or collision that might occur at or after ten seconds from the start of the warning. The degree to which the human driver needs to be attentive is determined by the manufacturer’s operational concept for the AV. The desired functionality is that the computer driver should monitor to ensure the human driver is displaying the level of supervisory attention required for the computer driver to operate without undue risk, given the operational concept.

Limitation 2 is that the human driver has contributory negligence liability for failing to take over control of driving in a timely and effective manner when it is reasonably evident that there is a need to do so to ensure safety, and it is practicable for a competent driver with reasonable skill to do so in a way that avoids harm given the circumstances. To be reasonably evident, the need for the human driver to take control of the vehicle must be based on reasonably observable information indicating to the human driver that the computer driver is unlikely to continue providing safe driving operation. Reasonably observable information includes road conditions, actions by other road users, historical computer driver behavioral norms (e.g., expectations of the specific human driver involved, set by previous trips in a particular AV model), vehicle equipment failures, and any evident sources of technical impairment of the computer driver. The practical solution is to allow the human driver a minimum ten-second window to take over safe operation of the vehicle, measured from the time that she might reasonably have discerned a need to take over.

The “reasonably discerned” qualifier is essential. The human driver is not expected to be an expert in the internal workings and potential faults of the computer driver. Therefore, any threat to safety that is not readily evident to a typical human driver (a “reasonable man” driver, not a trained specialist) must be identified and announced by the computer driver (e.g., via a takeover alarm) to initiate a transfer of liability from the computer driver to the human driver. Factors to consider in whether the need for a takeover is reasonably evident would include: whether the computer driver issues a takeover alarm, whether the current behavior of the computer driver in response to a potential safety threat is markedly inconsistent with its customary behavior in situations in which it has previously mitigated a hazard with no intervention, and whether a situation is so obviously dangerous to an attentive supervising human driver that a dramatic maneuver, such as panic braking, is clearly warranted.

Even though a human driver might be attentive, it is possible for the computer driver to put the human driver in an unrecoverable situation. This is especially true if the computer driver conducts a sudden, dramatic maneuver that might lead to an accident or crash, such as suddenly swerving into oncoming traffic or swerving into a tree or other obstacle on an otherwise clear and empty roadway. So too, the computer driver should not be able to use an alertness warning of questionable validity (or simply turn itself off entirely) as a tactical tool to shed blame onto a human driver immediately before an impending collision. Once a computer driver assumes sustained control of steering, liability should only be shifted back onto the human driver in situations that permit the human driver a reasonable chance to resume safe driving, including sufficient reaction time to cure any drift in attention and to regain both situational awareness and control of the vehicle. We set that time as a minimum of ten seconds in all cases, with the potential for a court to decide a longer time is appropriate if justified by the circumstances.

The liability of the AV manufacturer commences in supervisory mode once the computer driver engages. Exclusive AV manufacturer liability potentially ceases (1) ten seconds after a driver monitoring system sounds an effective alarm or other alerts designed to reestablish the human driver’s attention if the driving automation system determines that the human driver is inattentive; (2) ten seconds after the computer driver makes a request for the human driver to take over control of active steering on a sustained basis due to a system fault detected by the driving automation system; (3) ten seconds after the computer driver makes a request for the human driver to take over control due to an operational design domain (ODD) exit detected by the driving automation system; and (4) ten seconds after the computer driver makes a request for the human driver to take over control due to a driving automation system determination that the computer driver is unable to continue operation without undue risk.

The liability of the manufacturer ceases ten seconds (or more) after a hazard becomes reasonably evident, even if the computer driver does not activate a takeover request, if a readily observable road hazard is encountered and the human driver providing supervision (1) is shown to be alert in fact (regardless of whether any driver monitor detects a deficit in alertness) and (2) has reasonable time to respond to mitigate the road hazard by taking over control of steering and other vehicle motions. The degree of alertness required and the length of time that is reasonable will depend on both the AV’s operational concept and the hazardous situation—with a ten-second window as the standard minimum amount of reaction time, potentially with a longer time if appropriate to the situation. The possibility of a transfer of liability to a human driver notwithstanding, the computer driver retains liability if it does not also implement a best-effort hazard mitigation maneuver in response to the detected situation even after a reasonable response time from the human driver has elapsed. The ten-second windows for driver-monitoring alerts and evident-need-to-intervene responses run concurrently if the two situations overlap.
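
The following sketch summarizes, in simplified form, how these concurrent ten-second windows interact. It is an illustration of our proposed rules only, not statutory text; all names are our own hypothetical labels, and a court could still find that a longer takeover time was reasonable on the facts.

    from typing import Optional

    SAFE_HARBOR_SECONDS = 10.0

    def exclusive_manufacturer_liability(now: float,
                                         computer_driver_engaged: bool,
                                         monitor_alert_at: Optional[float] = None,
                                         takeover_request_at: Optional[float] = None,
                                         hazard_evident_at: Optional[float] = None,
                                         driver_alert_in_fact: bool = False) -> bool:
        """Whether exclusive manufacturer liability persists in supervisory mode.

        Each trigger (a driver-monitoring alert; a takeover request for a system
        fault, an ODD exit, or an inability to continue without undue risk; or a
        reasonably evident hazard with a driver who is alert in fact) opens a
        ten-second window. The windows run concurrently, so the earliest window
        to expire marks the earliest point at which exclusive manufacturer
        liability can end.
        """
        if not computer_driver_engaged:
            return False  # conventional-mode rules apply instead
        expirations = []
        if monitor_alert_at is not None:
            expirations.append(monitor_alert_at + SAFE_HARBOR_SECONDS)
        if takeover_request_at is not None:
            expirations.append(takeover_request_at + SAFE_HARBOR_SECONDS)
        if hazard_evident_at is not None and driver_alert_in_fact:
            expirations.append(hazard_evident_at + SAFE_HARBOR_SECONDS)
        if not expirations:
            return True  # no trigger yet; the manufacturer remains exclusively liable
        return now < min(expirations)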

As a concrete example, consider a vehicle operating in supervisory mode encountering a stopped fire truck in a travel lane. The computer driver would be negligent for crashing into the fire truck unless the crash were caused by one of the following situations:

• The human driver had become inattentive and an effective driver monitoring alarm had activated and continuously attempted to regain driver attention for more than ten seconds before the crash, but the driver remained inattentive and therefore was not able to recognize and respond to a potential crash.
• The human driver was in fact as alert as required by the vehicle’s operational concept but failed to respond to an evident need to take over vehicle operation. The human driver has a duty to intervene when there is an evident need to do so, but is not expected to have superhuman response times or extraordinary driving skills, to detect other than obvious computer driver limitations, to compensate for computer driver design defects as a test driver might, or to enforce limitations on acceptable operational conditions that are not identified by the computer driver in the form of mandatory intervention requests.
• The human driver performed a malicious intervention.

Another road user (but not the human driver of the AV) might instead be liable if that road user’s negligent, reckless, or malicious actions proximately cause an accident or collision with the AV.

D. Conventional Mode

When a vehicle is operating in conventional mode, the human driver is responsible for negligence losses (subject to ordinary exceptions). The computer driver may have liability for operation in conventional mode if the system assumes control of some or all of the dynamic driving tasks in a manner that a reasonable human driver would not expect, and the unanticipated assumption of control by the computer driver proximately causes an accident or collision. There might also be computer driver liability if the human driver reasonably believes that an automated driving feature has been engaged (e.g., due to an acknowledgement chime in response to an engagement request that the human driver normally associates with an engagement of autonomous or supervisory mode) when in fact it has not.

This provision for computer driver liability applies even if the computer driver does not transition to providing sustained steering of the vehicle. For example, if the computer driver induces a momentary extreme steering command or initiates a panic brake for no reason (often called “phantom braking”), the computer driver would have liability even if not engaged by the human driver.

E. Mode Changes

Changes between modes carry with them the possibility of a liability burden shift as well as potential confusion about what the responsibilities of the human driver might be. Additionally, a mismatch between a human driver’s expectation of the current operational mode and the actual operational mode can lead to mishaps. Mode confusion, in which a human driver has a different mental model of the current operational mode than the computer driver, has been found to be a significant source of risk in other domains such as aviation. If a human driver is non-maliciously confused about the current operational mode, liability for any crash rests with the computer driver for not ensuring that the human driver is aware of the current mode.

Additionally, laws should not allow the computer driver to unilaterally force a mode change onto a human driver as a way of shedding blame for an impending crash or inability to operate. A computer driver in supervisory or test mode can use a driver takeover request to transfer liability to the human driver. As a practical matter, a driver takeover process might end with a transfer to conventional mode, so long as the mode change is readily evident to the human driver.

A computer driver in autonomous mode might request a transition to supervisory or conventional mode, but does not have the right to demand or force such a mode change during vehicle operation. Once a request to change into autonomous mode has been accepted by the computer driver, the computer driver cannot unilaterally exit autonomous mode without an explicit takeover action from a human driver. At the end of a driving cycle, a computer driver might transition to an “off” state, for example, once the vehicle is safely parked, and exit autonomous mode in that manner.
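
A minimal sketch of these mode-change constraints appears below. The function and mode labels are our own illustration of the proposed rules, not a specification or a real vehicle interface.

    def computer_driver_may_hand_off(current_mode: str, human_takeover_action: bool) -> bool:
        """Whether the computer driver may hand primary responsibility back to the human.

        In supervisory or testing mode, the computer driver may issue a takeover
        request (the ten-second safe harbor still applies before liability can
        shift, and any resulting mode change must be readily evident to the human
        driver). In autonomous mode, it may only request a change; the handoff
        takes effect solely through an explicit human takeover action, or by
        reaching an "off" state once the vehicle is safely parked.
        """
        if current_mode in ("SUPERVISORY", "TESTING"):
            return True
        if current_mode == "AUTONOMOUS":
            return human_takeover_action
        return False  # conventional mode: the human driver is already driving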

VI. Identification of Likely Accident & Collision Scenarios

The need for an effective approach to liability when computer drivers play a role in a crash is far from an abstract hypothetical issue. Crashes involving property damage and injuries are already happening that are attributable, in part or in whole, to computer driver actions that a potential plaintiff could reasonably characterize as negligent in a claim for compensation if a human driver had exhibited the same behavior. Some examples include the following:

• A vehicle suddenly switched lanes and slowed from 55 to 20 mph, leading to a multicar pileup with nine people, including one juvenile, being treated for “minor” injuries. The driver claimed that automated steering was activated at the time of the crash and caused the sudden swerve.
• The National Highway Traffic Safety Administration (NHTSA) has long-standing, still-open investigations into the supervised use of a computer driver crashing into emergency response scenes. As of June 2022, there had been fifteen injuries and one death attributed to the use of the computer driver on a particular vehicle make.
• A robotaxi developer issued a recall after another vehicle struck one of its robotaxis while the robotaxi was making an unprotected left turn. While the company claims that the oncoming vehicle was more to blame for the multi-injury crash, it does not maintain that it bears no blame, making contributory negligence a potential factor if any lawsuit were to arise from the crash. It is not out of the question to make a case, at least in some states, that stopping in an oncoming vehicle’s travel lane while making a left turn and then being hit is negligent driving behavior on the part of the vehicle turning left, regardless of any contributory road rule violations by the other vehicle.
• A review of the initial data set released by NHTSA as part of its standing general order data reporting requirement for SAE Level 2 and above automated vehicles included nearly 400 crashes serious enough to trigger a reporting requirement (generally involving an air bag deployment, reported personal injury, or tow truck) over 10 months. Those crashes included six fatalities and five serious injuries, counting only the crashes the car makers knew about and reported. Crashes have continued to occur, as reflected by subsequent data releases.
• A vehicle that required human driver supervision ran a red light and hit a crossing vehicle, resulting in two fatalities. That human driver faced felony criminal charges.

Based on these mishaps, it is clear that regardless of industry hype about AV safety, crashes involving the technology can be expected to occur. In some cases, lack of proper human driver supervision might be a contributing factor, but in others (especially in vehicles with no human driver tasked with monitoring the computer driver’s road behavior), the responsibility for negligent driving behavior must rest entirely and unambiguously on the computer driver.

We can identify some illustrative accident and collision scenarios that we think the law will likely need to address in the near future. Actual accidents and collisions involving existing driving automation systems motivate some of these scenarios. When we can identify situations in which the courts must resolve questions of liability, legislators can best promote judicial economy by providing an amendment or supplement to their statutes that addresses the expected uncertainty.

VII. Accident Scenarios Expressed in Terms of SAE Levels

Most, if not all, state regulations that address automated vehicles are currently keyed to the “levels” in SAE J3016, which range from 0 to 5. Typically, state regulations refer to “highly automated vehicles” (HAVs), which are defined as Levels 3–5, with Levels 0–2 being regulated as conventional vehicles (which means that, for practical purposes, Level 1–2 vehicle automation features are unregulated except for standing general order data reporting requirements and potential NHTSA recalls). In some cases, J3016 is explicitly referenced, and might even be incorporated by reference. At other times, terminology has been cut and pasted from J3016 without a reference. Either way, any use of the defined J3016 levels as a basis for general regulation is unsuitable for liability purposes. A Level 2 feature that controls steering on a sustained basis might be more or less safe than a Level 3 vehicle controlling steering on a sustained basis along with other functions. The driving automation system in a Level 3 vehicle is no more or less safe than in a Level 4 vehicle simply based on the level that corresponds to its design capabilities. The stated level of an automation feature is not predictive of its operational safety. Even a dramatically defective automation feature might meet the technical requirements to be designated at a high J3016 level.

Importantly, J3016 is not a safety standard, nor does it purport to be. Indeed, specification of safety is beyond its scope, despite its use (or misuse) in existing laws and regulations. J3016 is not even a fully established engineering standard. It is an “information report” (not an actual standard) containing a taxonomy of definitions to facilitate technical communications about driving automation systems technology and the capabilities of various automation features. Its initial version in 2014 did not contemplate its use in law or regulation. Without supplementing its basic initial structure, the 2016 version added a reference to possible use of the taxonomy for legal purposes. This reference remains in the current 2021 version. Despite this reference to possible legal or regulatory use, current law will have a gap even if a legislature decides to incorporate J3016 by reference or borrow its language for use in laws and regulations.

Among the reasons that J3016 is not suitable for use with liability are the following:

· It bases levels on “manufacturer intent” rather than vehicle capability dis­played on public roads. This makes it easy to aggressively game the de­clared intent of levels to evade regulatory and liability requirements by declaring that any vehicle is “intended” to be Level 2, and therefore not subject to state regulations on automated vehicles. This technique can be especially problematic if a safety driver for a bug-ridden test vehicle is instead said to be a Level 2 fallback ready user (i.e., a normal supervisory human driver), resulting in unregulated public road testing.
· Level 2 vehicles fully automate the control of vehicle motion, but require neither driver monitoring nor automated enforcement of the ODD. It is inevitable that such an approach will lead to automation complacency and subsequent blame being placed on human drivers for, in essence, not be­ing superhuman.
· It defines the term automated driving system (ADS) based on being at Level 3 and above, implicitly excluding from scope any discussion of liability associated with Level 2 systems, and even steering-only Level 1 systems.
· A number of technical details make the definitions problematic for use with liability. As an example, a commonly held understanding is that with a Level 3 system, the ADS is supposed to alert the human driver to the need to take over and allow a delay before intervention is required; however, SAE J3016 provides for both no alert and no delay in some circumstances. For liability purposes, the proposed framework described herein addresses those topics in a concrete manner, whereas J3016 leaves considerable room for uncertainty about how driver liability would be assigned for an equipment failure that does not result in the computer driver providing an explicit takeover request to the human driver.

To the maximum degree practicable, the use of terminology and concepts within this framework does not conflict with J3016. However, due to the unsuitability of using J3016 as the sole foundation for a liability approach, we have defined complementary terms and concepts.

Despite SAE J3016 being neither a safety standard nor a demarcation of different levels of risk, we set forth below different scenarios described in terms of SAE levels for which our structure of operating modes proves useful for analysis. In each scenario, it can be instructive to ask oneself whether the human driver or the computer driver should be responsible for causing or failing to avoid a crash.

A. Case 1

The human driver of a vehicle with an “SAE Level 2” sustained automated steering feature engaged is diligently monitoring the performance of her vehicle on a divided highway, following her normal daily commuting route. Upon entering a tunnel, the vehicle suddenly swerves hard to the side, cutting off other traffic in the high-speed lane and impacting a tunnel wall. Other vehicles crash into it, forming a pileup. Several occupants of other vehicles are injured, and one is killed. Subsequent analysis finds that the Level 2 feature was being used as required by manufacturer instructions, but a reasonably attentive human driver would not have been able to react to such a dramatic, unexpected swerve in time to avoid the crash. Drivers in other vehicles were following safe vehicle spacing best practices for the conditions, but could not have avoided the pileup due to the unexpected swerve and crash.

This scenario is based on a real-life mishap in November 2022 involving a Tesla vehicle with Autopilot engaged that resulted in injuries, but fortunately no fatalities.

B. Case 2

The driver of a vehicle with an “SAE Level 2” feature engaged, which is advertised as capable of full self-driving (but with human driver supervision also required), does not properly respond to activated and highly conspicuous school bus warning displays, and the vehicle injures a disembarking student. The human driver had previously experienced the vehicle coming to an aggressive stop only a few feet from such a school bus, and thus waited until the usual short distance was reached before realizing something was wrong. After that short distance had been reached, the human driver had insufficient reaction time available to process the failure to stop, assert control, and avoid the crash.

This scenario is inspired by a real-life mishap involving a Tesla vehicle in March 2023 that is being investigated by NHTSA. The deviation from normally expected last-second stopping behavior in this scenario is hypothetical.

C. Case 3

The driver of a vehicle with an “SAE Level 3” feature engaged has been told she is permitted to take her eyes off the road so long as she is available to intervene when requested. On a routine drive, the takeover alarm sounds. In the human driver’s experience, takeover alarms are uniformly of low urgency, indicating the end of a drive on a particularly benign piece of roadway that is a normal part of the commuting route. The driver looks up to see that her vehicle is traveling at the full speed limit, approaching a red traffic light with insufficient distance to stop. A child (obeying her pedestrian “walk” signal) is in the crosswalk directly in front of the vehicle. The driver slams on the brakes, but the child is hit anyway. Subsequent analysis finds that the Level 3 feature was being used as required by manufacturer instructions, but a 50th percentile driver would not have been able to stop in time, given the late warning and prevailing road conditions.

This scenario is inspired by a 2018 real-life fatal mishap in which an Uber ATG test vehicle failed to see a pedestrian at an unofficial road crossing point; the real case involved a test vehicle rather than the series production vehicle hypothesized in this example scenario.

D. Case 4

An occupant of a vehicle with an SAE Level 4 feature engaged is riding as a passenger, trusting the vehicle to handle driving safety. The passenger happens to notice an overturned truck in the road ahead. Trusting the technology, which she has been relentlessly told is safer than a person driving, she goes back to watching the scenery out the side window. Unfortunately, the vehicle crashes into the overturned truck. Subsequent analysis finds that the crash could have been avoided if the passenger had pressed the big red “emergency stop” button in the passenger compartment, but the passenger did not realize this was expected of her. Moreover, the passenger was a 16-year-old who was using a Level 4 robotaxi instead of a private vehicle due to having failed her driving test.

This scenario is inspired by a Tesla Autopilot crash into an overturned truck, with the presence of an unqualified passenger instead of a qualified driver being introduced as a hypothetical.

E. Case 5

An occupant of a vehicle with an SAE Level 4 feature engaged is riding as a passenger but notices that the car has not changed lanes to avoid a fire truck parked at an emergency response scene and is continuing at full highway speed. Judging that there is not enough time left to brake to a stop, the passenger (who has a valid driver license) takes over vehicle control and swerves into an adjacent lane, sideswiping another car. An ensuing multi-vehicle crash results in severe injuries. Subsequent analysis shows that the ADS detected the adjacent vehicles and, due to its planned use of extreme braking force, would have slowed to only three miles per hour at the point of impact, resulting in no substantive damage and no injuries if the passenger had not intervened.

This hypothetical scenario uses a real Tesla crash into a fire truck as a point of departure.

F. Case 6

A driver supervising the testing of an SAE Level 4 feature permits the vehicle to enter an intersection. Another vehicle enters the same intersection and begins performing “donuts” (recklessly spinning in circles in the intersection with high engine power, in this case by a manually driven vehicle) in an apparent attempt to harass the test vehicle. The safety driver lets the computer driver proceed to make a left turn at the intersection, and the test vehicle is hit by the recklessly driven vehicle.

This scenario is inspired by a real-life mishap that involved a Cruise LLC testing vehicle on March 6, 2023.

G. Case 7

An SAE Level 4 robotaxi runs through emergency scene yellow tape and becomes tangled in live power lines that came down during a storm that same night. A passenger in the vehicle panics and leaves the vehicle, only to be electrocuted.

This scenario is inspired by multiple uncrewed Cruise robotaxis entering a downed power line scene and getting both power lines and emergency scene tape tangled on their sensors. Fortunately, the power lines were not live (although a passenger would have had no way of knowing that at the time), and the robotaxis happened to be empty.

All of these crashes have a basis in prior incidents, though mostly with less severe consequences. As more automated vehicles are tested and deployed, the law will inevitably confront more crashes like these, as well as others we have yet to imagine.

VIII. Why the Law Should Use Steering on a Sustained Basis to Allocate Liability

The most significant risks from driving automation systems surface when a driving automation system steers a motor vehicle on a sustained basis. For completely automated vehicles, steering is always automated. However, for vehicles that can operate with shared computer driver and human driver responsibility for safety, steering serves as an important litmus test for determining whether the human driver is actually engaged in driving, or is instead watching the computer driver operate the vehicle.

Automated steering is the most significant risk because steering on a sustained basis by a computer driver creates the well-documented phenomenon of automation complacency in a human driver. Vehicles that require either continuous human driver supervision or a human driver who is immediately responsive to takeover requests issued by the computer driver are subject to degradation of system safety due to automation complacency. This is not simply a matter of a human driver being lazy about paying attention, but rather a fundamental cognitive limitation of all human drivers.

The approach of using automated steering as the litmus test for determining if a computer driver is active differs from the common legal approach based on SAE J3016 Levels 3–5, often referred to in aggregate as HAVs, because using steering on a sustained basis as a litmus test also includes all Level 2 features, and even some possible Level 1 features. However, the U.S. federal regulator NHTSA has, in practice, begun regulating SAE Level 2 vehicles on a par with HAVs by requiring Level 2 vehicles to report crash data in a manner similar to HAVs. Thus, there is precedent for treating Level 2 vehicles as having computer drivers.

IX. Event Data Recording Features to Assist with Liability Attribution and Allocation

The limitations on liability for the computer driver are based on two factors that are amenable to in-vehicle monitoring: (1) the alertness posture of the human driver, and (2) whether the specifics of any particular crash were amenable to an effective human driver intervention to mitigate or avoid harm. Both of these factors should motivate manufacturers to install instrumentation to measure and record both driver alertness and situational understanding for events leading up to any accident. In the absence of evidence indicating a proper transfer of control to a human driver, liability will remain with the computer driver once the computer driver assumes control of steering on a sustained basis.

While not required to implement the proposed negligence laws, equipment specifications regarding data recording prior to crashes could greatly assist determination of negligence liability for a computer driver. The proposed liability rules create an incentive to include instrumentation for recording events relevant to the shift in liability back to a human driver. A draft bill being circulated for comments contains the outline of this type of equipment specification.

“(3) EVENT DATA RECORDERS.—

“(A) IN GENERAL.—Not later than 5 years after the date of the enactment of this section, the Secretary shall issue a final rule updating part 563 of title 49, Code of Federal Regulations, to—

“(i) specify requirements for the collection, storage, and retrievability of event data of partially automated vehicles and highly automated vehicles to account for, as practicable—

“(I) whether the partial driving automation system or automated driving system was performing the entirety or subtasks of the dynamic driving task;

“(II) the occurrence of a malfunction or failure of the partial driving automation system or automated driving system;

“(III) whether the partially automated vehicle or highly automated vehicle was operating within its operational design domain when the partial driving automation system or the automated driving system was performing the entirety or subtasks of the dynamic driving task;

“(IV) the performance of the dynamic driving task; and

“(V) additional event data needed to assess the performance of the vehicle; and

“(ii) update pre-crash data elements to account for, as practicable, the performance of advanced driver assistance systems.”

If Congress enacted proposed legislation of this sort, and NHTSA issued appropriate regulations to implement the law, administration of an architecture for negligence liability for computer drivers could function smoothly and at very low cost.

Existing federal motor vehicle safety standards (FMVSS) already require the collection of important data that could help make this determination, but that data does not contemplate the role a computer driver might play in vehicle operation. Data that indicates whether the computer driver was engaged at the time of the crash and, if not engaged, at what point in time the computer driver ceased to be engaged, would help allocate liability for operation when autonomous and supervisory mode use might be relevant to a crash investigation. This is a very easy feature to add as an engineering matter. If law enforcement officers could access this information through the already mandated OBD-II data access port in each vehicle, it would greatly facilitate production of accurate and useful police reports. In addition, retaining and producing data from video sensors for perhaps the three minutes prior to an accident or collision would greatly assist any determination of negligent or malicious behavior by other motorists and third parties.
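To make this concrete, the following minimal sketch shows one way a manufacturer might log computer driver engagement state and mode-transition timestamps so that an investigator could later determine whether the computer driver was engaged at crash time and, if not, when it ceased to be engaged. The class names, field names, and the treatment of which operational modes count as "computer driver engaged" are our own illustrative assumptions, not requirements drawn from any standard, statute, or manufacturer specification.

```python
# Illustrative sketch only: hypothetical field names and mode semantics;
# not drawn from any actual FMVSS, SAE, or manufacturer specification.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class OperationalMode(Enum):
    TESTING = "testing"
    AUTONOMOUS = "autonomous"
    SUPERVISORY = "supervisory"
    CONVENTIONAL = "conventional"


# Assumption: the computer driver steers on a sustained basis in these modes.
COMPUTER_DRIVER_MODES = {
    OperationalMode.TESTING,
    OperationalMode.AUTONOMOUS,
    OperationalMode.SUPERVISORY,
}


@dataclass
class ModeTransition:
    timestamp_s: float          # vehicle clock time of the transition
    new_mode: OperationalMode   # mode entered at that time
    takeover_request: bool      # whether an explicit takeover request was issued


class EngagementLog:
    """Records mode transitions in chronological order so investigators can
    answer: was the computer driver engaged at crash time, and if not, when
    did it cease to be engaged?"""

    def __init__(self) -> None:
        self.transitions: List[ModeTransition] = []

    def record(self, transition: ModeTransition) -> None:
        self.transitions.append(transition)

    def mode_at(self, t: float) -> Optional[OperationalMode]:
        """Return the operational mode in effect at time t, if known."""
        mode = None
        for tr in self.transitions:
            if tr.timestamp_s <= t:
                mode = tr.new_mode
        return mode

    def last_disengagement_before(self, t: float) -> Optional[ModeTransition]:
        """Return the most recent transition out of a computer driver mode
        occurring at or before time t, if any."""
        last = None
        prev_mode = None
        for tr in self.transitions:
            if tr.timestamp_s > t:
                break
            if prev_mode in COMPUTER_DRIVER_MODES and tr.new_mode not in COMPUTER_DRIVER_MODES:
                last = tr
            prev_mode = tr.new_mode
        return last
```

A readout of this kind, exposed through the OBD-II port or a comparable interface, would let a police report state directly whether the vehicle was in autonomous, supervisory, or conventional mode at impact.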

A forensic ability to report the operational mode and indicia of driver attention will be especially important for at least the ten-second window before a crash, to correspond with the ten-second liability transfer window, although 30 to 90 seconds would be preferable. The operational mode at the time of a crash does not necessarily reflect whether the computer driver created a no-win situation for the human driver in which a crash was inevitable, whether the human driver had lost attention and regained attention only at the last second, or whether the computer driver performed an improper mode change without giving the human driver the benefit of a reasonable handoff procedure.
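As a rough illustration of the retention window discussed above, the sketch below keeps a rolling buffer of operational mode and driver-attention samples and freezes a copy when a crash is reported. The window duration, sampling rate, attention signals, and class names are hypothetical choices of ours, not requirements from any regulation.

```python
# Illustrative sketch only: buffer duration, sample rate, and attention
# signals are assumptions, not requirements from any regulation or standard.
import collections
from dataclasses import dataclass
from typing import Deque, List


@dataclass
class DriverStateSample:
    timestamp_s: float      # vehicle clock time of the sample
    operational_mode: str   # e.g., "autonomous", "supervisory", "conventional"
    eyes_on_road: bool      # output of a hypothetical driver-monitoring camera
    hands_on_wheel: bool    # output of a hypothetical steering torque sensor


class PreCrashAttentionBuffer:
    """Retain recent samples so that the window before a crash (at least
    10 seconds, preferably 30 to 90 seconds) can be preserved for review."""

    def __init__(self, window_s: float = 90.0, sample_rate_hz: float = 10.0) -> None:
        max_samples = int(window_s * sample_rate_hz)
        self._buffer: Deque[DriverStateSample] = collections.deque(maxlen=max_samples)

    def add_sample(self, sample: DriverStateSample) -> None:
        self._buffer.append(sample)  # oldest samples fall off automatically

    def snapshot(self) -> List[DriverStateSample]:
        """Freeze a copy of the retained window when a crash is reported."""
        return list(self._buffer)
```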

Capturing and preserving crash data will require a more nuanced approach than current EDR mechanisms. As currently designed, EDRs snapshot data immediately preceding somewhat severe crashes based on detecting a high deceleration spike. It is common for EDRs to fail to capture data for low-speed events (especially ones that do not involve airbag deployment). Crashes into a pedestrian that do not dramatically change the speed of the vehicle at impact are particularly problematic for that type of data recording trigger. While EDR data requirements will need to be updated to provide robust forensic crash data relevant to computer drivers, the triggering mechanism will also need to change to be based on mishap scenarios detected by the computer driver, regardless of whether that mishap happened to involve the vehicle decelerating dramatically due to hitting a high-mass or rigidly fixed obstacle.
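The change in triggering philosophy can be sketched as a small decision function: instead of firing only on a deceleration spike or airbag deployment (the traditional trigger), recording would also be triggered by mishap indications produced by the computer driver itself, such as a perceived contact with a pedestrian. The threshold values and signal names below are illustrative assumptions only.

```python
# Illustrative sketch only: thresholds and signal names are assumptions,
# not values taken from any EDR regulation or standard.
from dataclasses import dataclass


@dataclass
class TriggerInputs:
    peak_decel_g: float               # measured deceleration, in g
    airbag_deployed: bool             # classic severe-crash indicator
    contact_detected: bool            # computer driver perceives contact with an object or person
    vulnerable_road_user_in_path: bool  # pedestrian or cyclist in the planned path at contact time


def should_record_event(inputs: TriggerInputs, decel_threshold_g: float = 2.0) -> bool:
    """Return True if a pre-crash data snapshot should be preserved.

    A traditional EDR fires mainly on the first branch; the additional
    branches capture low-speed and pedestrian mishaps that do not slow the
    vehicle dramatically."""
    if inputs.airbag_deployed or inputs.peak_decel_g >= decel_threshold_g:
        return True  # classic high-severity trigger
    if inputs.contact_detected:
        return True  # any perceived contact, even at low speed
    if inputs.vulnerable_road_user_in_path:
        return True  # impact or near-impact involving a pedestrian or cyclist
    return False
```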

While EDR standards are maturing, there should be no incentive for computer driver manufacturers to fail to retain data relevant to crashes in an attempt to provide themselves with plausible deniability. A failure to collect data that would normally be available during system operation should not form a basis for transferring liability to a human driver. Rather, in the absence of data, it should be assumed that any data that might have been collected but was not would tend to show the computer driver to be negligent. This approach incentivizes, but does not create equipment requirements for, manufacturers to collect and retain data on computer driver and human driver behavior for a reasonable amount of time before a crash or accident.

In the absence of federal law on EDR systems, state law nevertheless might structure presumptions to encourage manufacturers to include these features as part of their driving automation systems. A simple presumption might be that, at the time of the accident or collision, the computer driver was active. If internal data reflects the state of the computer driver at the time of the accident or collision (and during a prior interval of ten to 90 seconds), that data must be provided to a prospective plaintiff, law enforcement, and insurance providers free of charge.
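For completeness, the interaction between this presumption and the adverse inference for missing data could be applied mechanically along the following lines. The function and its inputs are purely illustrative of how the presumptions discussed in this Article might operate; they do not state any enacted rule.

```python
# Illustrative sketch only: models the presumptions discussed in this Article,
# not any enacted statute or regulation.
from typing import Optional


def presumed_computer_driver_active(recorded_mode: Optional[str],
                                    data_available: bool) -> bool:
    """Apply the proposed presumptions to a single crash record.

    1. If no internal data is available, presume the computer driver was
       active (and, per the adverse-inference approach, assume the missing
       data would have tended to show the computer driver to be negligent).
    2. If data is available, the presumption is displaced by what the data
       actually shows.
    """
    if not data_available or recorded_mode is None:
        return True  # presumption applies in the absence of data
    # Assumption: the computer driver is active in these operational modes.
    return recorded_mode in ("testing", "autonomous", "supervisory")
```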

Conclusion

A legal system needs rules that produce an equitable, fair, just, and cost-effective attribution and allocation of responsibility for loss events, crashes, and accidents involving automated vehicles. A key issue that these rules must address centers on the scope of potential liability of a plaintiff for contributory negligence in any incident. If a defendant can successfully assert contributory negligence as a defense in almost every case based on a simple failure of a human driver to intervene to prevent an accident (regardless of whether it was reasonable to expect a competent driver to do so given the specifics of the situation), the human driver functions as a “moral crumple zone” that insulates a manufacturer from liability for losses that a neutral observer or reasonable person would fairly attribute to a technology failure.

The effective elimination of liability (or its substantial reduction), created by a failure of legislatures to act in the face of technological development, removes an important incentive for manufacturers to produce a safe product. Beyond incentives, however, remains a question of equity, justice, and fairness. Scholars generally acknowledge that the nationwide railroad system developed over the prior two centuries in the shadow of liability-reducing rules (primarily centered on a narrow scope given to the proximate cause of an accident). Though the nation and the population as a whole benefitted enormously from the development of the transcontinental rail system, the railroad companies (headed by so-called “robber barons”) did not bear many costs associated with this development and expansion. Rather, the fraction of the public who lived near the path of the railroad tracks bore the brunt of uncompensated losses (not the shareholders, who made immense profits from implementation of rail technology).

Even if one could demonstrate that computer drivers were, on average, safer than human drivers (a result that would benefit the nation and the population as a whole), this fact in no way should absolve a computer driver from liability in an individual accident case in which a human driver would have incurred liability by acting the same way the computer driver did. A very safe human driver may get a reduction in her insurance premium, but she does not get a free pass due to all the crashes she avoided if she later hits and kills a pedestrian due to negligence. General statistics do not influence liability in the individual case. Drivers, either human or computer, should not accumulate free passes on negligent behavior based on their overall statistical driving records.

Put simply, computer drivers should be held to the same standards as human drivers when determining negligence. While this might not ensure that they are safe enough to satisfy the needs and requirements of all relevant stakeholders, deploying habitually negligent computer drivers should not be acceptable to anyone. Individual acts of negligence should be called to account just as they are for human drivers. Because this standard of behavior is based on the well-practiced process of comparison to a “reasonable man” driver, it will in effect put a floor on how unsafe computer drivers are allowed to be, a floor that nonspecialist finders of fact can assess. The liability transfer rules presented in this Article provide actionable guidance on how to assess the transfer of liability between human and computer drivers for this purpose.

Manufacturers develop driving automation technology in a more socially conscious environment than was prevalent in the era of railroad development. Proponents of AV technology do not hesitate to emphasize the potential environmental benefits of deploying automated vehicles, nor the benefits to handicapped persons and marginalized communities of expanded transportation opportunities. In light of the positive and socially conscious goals used as selling points with federal and state legislatures, it would be an odd result indeed if these same companies opposed laws that made the industry bear the true costs of accidents from testing and deployment of driving automation systems.

This Article proposes a legal architecture that avoids, for today’s new transportation system, the manifest shortcomings of the robber baron era of a bygone age. The year 2023 proved to be a tumultuous one for the AV industry, including a pedestrian accident that resulted in Cruise suspending its robotaxi operations nationwide, followed by a management shake-up. These developments reveal serious potential roadblocks to realization of AV technology deployed at scale. But regardless of new developments and breakthroughs in technology solutions to address shortcomings, the fundamental nature of human reaction times in response to emergency situations and the reality of automation complacency will not change. The safety concerns identified in this Article with the awkward middle of shared driving responsibilities between computer driver and human driver need a legislative response, both to maintain public acceptance of AV technology and as a matter of basic justice and fairness. It remains to be seen whether the AV industry can “walk the socially conscious walk” by taking a different path to develop and deploy its new transportation technology. Such a path requires support for a legislative solution to the challenges posed by new AV technology and does not export the cost of accidents onto innocent bystanders to enhance profits for shareholders.
