Was It a Human or Car Accident?
Unlike humans, self-driving cars do not possess instinctive judgment. For example, if a ball rolls across the road in front of your car, you slow down and watch for a child who might come running after it. Self-driving cars, by contrast, are programmed to follow the letter of the law, which can be problematic when sharing the road with human drivers.
As a Google self-driving car slowed for a pedestrian, its safety driver manually applied the brakes. The car was hit from behind, sending the safety driver to the emergency room with mild whiplash. Google's test report stated: "While the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and traveled closer to the crosswalk, giving the car behind a little more room to stop."
In contrast, doing nothing can also cause an accident when other drivers expect an action. At a four-way stop, a Google self-driving car was unable to proceed because its sensors kept waiting for the other (human) drivers to stop completely and let it go. Jockeying for an advantage, the human drivers kept inching forward, effectively paralyzing the self-driving car's software. In these situations, drivers usually make eye contact to signal who can go first; there are no eyes in a self-driving car.
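The intersection deadlock lends itself to a simple illustration. The Python sketch below is purely hypothetical (the constants, function names, and thresholds are invented for this example, not drawn from Google's software); it shows how a strict wait-until-everyone-stops rule never fires when human drivers keep creeping, and how a patience timeout that lets the car inch forward can break the stalemate.

```python
# Illustrative sketch only: a hypothetical four-way-stop policy.
# A strict rule ("go only when all others are fully stopped") can
# deadlock against humans who creep forward; adding a patience
# timeout that lets the car inch forward breaks the stalemate.

FULL_STOP = 0.0          # m/s: speed that counts as "completely stopped"
CREEP = 0.5              # m/s: speed humans use to claim the right of way
PATIENCE_LIMIT = 5.0     # s: how long to wait before asserting intent

def strict_policy(other_speeds):
    """Letter-of-the-law rule: proceed only if everyone else is stopped."""
    return all(speed == FULL_STOP for speed in other_speeds)

def assertive_policy(other_speeds, seconds_waited):
    """Same rule, plus a timeout: after waiting too long, creep
    forward to signal intent, the way a human driver would."""
    if strict_policy(other_speeds):
        return "proceed"
    if seconds_waited > PATIENCE_LIMIT:
        return "creep_forward"   # assert right of way gradually
    return "wait"

# Humans at the intersection never come to a complete stop...
observed = [CREEP, CREEP, FULL_STOP]

# ...so the strict rule stays False while the creeping continues,
# whereas the assertive rule eventually inches into the intersection.
print(strict_policy(observed))            # False
print(assertive_policy(observed, 2.0))    # "wait"
print(assertive_policy(observed, 6.0))    # "creep_forward"
```

Even this toy version exposes the dilemma: the creeping behavior that resolves the deadlock is exactly the kind of assertive, not-quite-by-the-book driving that the strict rule was written to prevent.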
In another test, as a self-driving car approached a red light in moderate traffic, it sensed a vehicle coming from the other direction approaching the light at a higher-than-safe speed. The self-driving car immediately jerked to the right in case it needed to avoid a collision.
Researchers are now questioning whether they should program self-driving cars to break the law in order to protect the cars themselves, their passengers, and other drivers. If so, how much of the law should a self-driving car be allowed to break? In other words, how do you make a self-driving car react to, or predict, human instincts?
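One way to make the researchers' question concrete is to treat "how much law to break" as a tunable parameter. The hypothetical sketch below (again, invented names and numbers, not any manufacturer's actual logic) caps the car's speed at the posted limit plus a configurable margin: a margin of zero encodes the strict letter of the law, while a small positive margin lets the car keep pace with surrounding traffic.

```python
# Hypothetical illustration of "how much law to break" as a tunable
# parameter: the car may exceed the posted limit by a bounded margin
# when keeping pace with surrounding traffic is safer than obeying it.

def choose_speed(posted_limit_mph, traffic_flow_mph, legal_margin_mph=0.0):
    """Return a target speed.

    legal_margin_mph = 0.0 encodes the strict letter of the law;
    a small positive margin lets the car match traffic flow up to
    posted_limit_mph + legal_margin_mph, and no further.
    """
    cap = posted_limit_mph + legal_margin_mph
    return min(traffic_flow_mph, cap)

# Traffic is moving at 72 mph in a 65 mph zone.
print(choose_speed(65, 72))                       # 65: strict, may impede flow
print(choose_speed(65, 72, legal_margin_mph=5))   # 70: bends the law, bounded
```

Framed this way, the hard question shifts from code to policy: who chooses the margin, and who is liable when it is greater than zero?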
Who Should Be Liable?
As self-driving car technology advances, legislatures are preparing for the potential impact of these vehicles on the roads, yet many legal issues remain unsettled. Currently, six states (California, Florida, Michigan, Nevada, North Dakota, and Tennessee) and the District of Columbia have passed legislation related to autonomous vehicles. Florida, Michigan, and District of Columbia statutes contain language protecting original manufacturers from liability for defects introduced in the aftermarket by a third party who converts a non-autonomous vehicle into an autonomous vehicle. Additionally, a car manufacturer's liability is limited if an accident or injury involving an autonomously operating vehicle occurs when the car is equipped with aftermarket parts; in that case, the party who installed the autonomous technology is liable.
The California, Nevada, North Dakota, and Tennessee statutes are silent on liability. Interestingly, Florida, California, and Nevada laws specify that the person who causes the vehicle's autonomous technology to engage is the operator of the vehicle, so disputes over who was operating an autonomous vehicle are likely to arise. Additionally, because automated vehicles must still be supervised by human drivers, allegations that an accident occurred because the human (safety) driver failed to assume control quickly enough are also likely.
If a manufacturing defect injures a passenger riding in a car owned by a friend, the injured passenger can file a strict liability claim against the manufacturer. Hypothetically, the driver of a non-autonomous vehicle injured in a collision with an allegedly defective autonomous vehicle could likewise bring a manufacturing-defect claim against the autonomous vehicle's manufacturer. Complaints alleging design defects are likely to arise in connection with the responsibilities shared between the vehicle and the human driver.
Despite significant advances in self-driving cars, technological barriers and legal issues must be overcome before self-driving vehicles are safe enough for road use. Designing a car that thinks like a human yet adheres to the strict letter of the law is no simple task, nor is anticipating the shared liability of human drivers and autonomous vehicles.