
Litigation News

Is the Legal Community Ready for Self-Driving Cars?

Angela Foster

Summary

  • By 2020, technology companies and car manufacturers project that 10 million self-driving cars will be on the roads.
  • Technology has moved beyond simple features like autonomous braking, crash prevention radars, and lane assistance.
  • Recently, automotive companies have developed self-driving features that provide a car with the capability of driving without any human intervention.

By 2020, technology companies and car manufacturers project that 10 million self-driving cars will be on the roads. Notably, technology has moved beyond simple features like autonomous braking, crash prevention radars, and lane assistance to cars that drive themselves with little human oversight.

Recently, automotive companies like Mercedes, BMW, and Tesla have developed self-driving features that allow a car to drive without any human intervention. The question is no longer whether self-driving cars will become a reality, but whether the legal community is ready for them.

What Is a Self-Driving Car?

A self-driving car is defined as any car with features that allow it to accelerate, brake, and steer its own course with limited or no driver interaction. Self-driving cars are categorized as semi-autonomous or fully autonomous. A fully autonomous vehicle can drive from point A to point B and handle on-road scenarios without needing any interaction from the driver.

The self-driving car's control system is equipped with a GPS unit, an inertial navigation system, and a range of sensors. The control system is capable of making intelligent decisions by maintaining an internal map of the surroundings and using that map to find an optimal path to a destination that avoids obstacles.

Before making navigation decisions, the self-driving vehicle builds a three-dimensional map of its environment and locates itself within that map and in relation to other objects. Once the vehicle determines the best path to take, the information is converted into commands and transferred to the vehicle's actuators, which control the vehicle's steering, braking, and throttle. This process is repeated multiple times each second until the self-driving vehicle reaches its destination.
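To make that cycle concrete, the short Python sketch below walks through a few passes of the sense-map-plan-act loop described above. It is a minimal illustration under assumed names: read_sensors, update_map, plan_path, path_to_commands, and drive_to are hypothetical placeholders, not any manufacturer's actual software.

    import time

    CYCLE_HZ = 10  # the loop repeats multiple times each second

    # Hypothetical stand-ins for the vehicle's real components.
    def read_sensors():
        """Pretend to read the GPS unit, inertial navigation system, and range sensors."""
        return {"gps": (0.0, 0.0), "speed": 10.0, "ranges": []}

    def update_map(world_map, readings):
        """Fold new sensor readings into the internal map and locate the vehicle in it."""
        world_map["vehicle"] = readings["gps"]
        return world_map

    def plan_path(world_map, destination):
        """Pick an obstacle-free path toward the destination (trivial placeholder)."""
        return [destination]

    def path_to_commands(path):
        """Convert the next path segment into steering, braking, and throttle values."""
        return {"steering": 0.0, "braking": 0.0, "throttle": 0.2}

    def drive_to(destination, cycles=5):
        """Run a few sense-map-plan-act cycles; a real system loops until arrival."""
        world_map = {}
        for _ in range(cycles):
            readings = read_sensors()                     # 1. sense the environment
            world_map = update_map(world_map, readings)   # 2. update the map and localize
            path = plan_path(world_map, destination)      # 3. plan an obstacle-free path
            commands = path_to_commands(path)             # 4. translate into actuator commands
            print(commands)  # a real car would send these to its steering, braking, and throttle
            time.sleep(1.0 / CYCLE_HZ)

    drive_to(destination=(100.0, 50.0))

A production system would replace each placeholder with real sensor fusion, mapping, and planning components, and the loop would run until the vehicle reaches its destination.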

The self-driving vehicle's internal map includes the current and predicted locations of all fixed obstacles (buildings, traffic lights, stop signs) and moving obstacles (other vehicles and pedestrians) in its vicinity. The vehicle uses this information to direct itself safely to its destination while avoiding obstacles and following the rules of the road.
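One way to picture that internal map is as a collection of tracked objects, each with a current position and, for moving objects, a predicted one. The sketch below is in the same illustrative spirit; the class and field names (Obstacle, InternalMap, and so on) are assumptions for the example, not a real implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Obstacle:
        """A single object tracked in the vehicle's internal map."""
        kind: str                      # e.g. "building", "traffic_light", "vehicle", "pedestrian"
        position: tuple                # current (x, y) location relative to the car
        velocity: tuple = (0.0, 0.0)   # fixed obstacles have zero velocity

        def predicted_position(self, seconds_ahead: float) -> tuple:
            """Estimate where a moving obstacle will be a short time from now."""
            x, y = self.position
            vx, vy = self.velocity
            return (x + vx * seconds_ahead, y + vy * seconds_ahead)

    @dataclass
    class InternalMap:
        """Current and predicted locations of everything near the vehicle."""
        obstacles: list = field(default_factory=list)

        def moving_obstacles(self):
            return [o for o in self.obstacles if o.velocity != (0.0, 0.0)]

    # Example: a fixed stop sign and a moving pedestrian in the map.
    internal_map = InternalMap(obstacles=[
        Obstacle(kind="stop_sign", position=(12.0, 3.0)),
        Obstacle(kind="pedestrian", position=(8.0, -1.0), velocity=(0.0, 1.2)),
    ])
    print(internal_map.moving_obstacles()[0].predicted_position(seconds_ahead=0.5))

A planner would consult both the current and the predicted positions when choosing a path, which is how the vehicle avoids obstacles while following the rules of the road.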

Only a few states have passed laws allowing the testing of autonomous cars on the roads; however, no state allows a fully autonomous vehicle on the road. California requires that a human (safety driver) be present inside the vehicle and be capable of taking control in the event of a technology failure or other emergency. Google's self-driving car looks like a golf cart with doors and is designed to be fully autonomous. The car cannot drive faster than 25 miles per hour and has a steering wheel and pedals so a safety driver can take over.

Was It a Human or Car Accident?

Unlike humans, self-driving cars do not possess instinctive analysis. For example, if a ball rolls across the road in front of your car, you will reduce your speed and look for a child who might come running after the ball. Self-driving cars are programmed to follow the letter of the law, which may be problematic when sharing the road with human drivers.

As a Google self-driving car slowed down for a pedestrian, the safety driver manually applied the brakes. The car was hit from behind, sending the safety driver to the emergency room for mild whiplash. Google's test report stated: "While the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and traveled closer to the crosswalk, giving the car behind a little more room to stop."

In contrast, doing nothing could cause an accident when drivers expect or anticipate an action. While trying to drive through a four-way stop, a Google self-driving car was unable to get through because its sensors kept waiting for other (human) drivers to stop completely and let it go. Looking for an advantage at the four-way stop, the human drivers kept inching forward, paralyzing the self-driving car. In these situations, drivers usually make eye contact with each other to suggest who can go first; there are no eyes in a self-driving car.

In another test, as a self-driving car approached a red light in moderate traffic, the car sensed that a vehicle coming from the other direction was approaching the red light at a higher than safe speed. The self-driving car immediately jerked to the right in case it had to avoid a collision.

Now researchers are questioning whether they should program self-driving cars to break the law in order to protect themselves, their passengers, and other drivers. If so, how much of the law should self-driving cars be allowed to break? In other words, how should self-driving cars be programmed to react to, or predict, human instincts?

Who Should Be Liable?

As self-driving car technology advances, legislatures are preparing for the potential impact of these vehicles on the roads, yet many legal issues remain unsettled. Currently, six states (California, Florida, Michigan, Nevada, North Dakota, and Tennessee) and the District of Columbia have passed legislation related to autonomous vehicles. The Florida, Michigan, and District of Columbia statutes contain language protecting original manufacturers from liability for defects introduced in the aftermarket by a third party who converts a non-autonomous vehicle into an autonomous vehicle. Additionally, a car manufacturer's liability will be limited if an accident or injury related to autonomous operation occurs when the car is equipped with aftermarket parts. Thus, the party who installed the autonomous technology is liable.

The California, Nevada, North Dakota, and Tennessee statutes are silent on liability. Interestingly, Florida, California, and Nevada laws specify that the person who causes the vehicle's autonomous technology to engage is the operator of the vehicle. Presumably, liability disputes over who was operating the autonomous vehicle may arise. Additionally, because automated vehicles must still be controllable by human drivers, allegations that an accident occurred because the human driver (safety driver) failed to assume control quickly enough are also likely.

In strict liability cases, a passenger injured by a manufacturing defect while riding in a car owned by a friend could file a claim against the manufacturer. Hypothetically, a driver of a non-autonomous vehicle injured in a collision with an allegedly defective autonomous vehicle could make a manufacturing defect claim against the autonomous vehicle's manufacturer. Liability complaints alleging design defects are likely to arise in connection with the responsibilities shared between the vehicle and the human driver.

Despite significant advances with self-driving cars, technological barriers and legal issues must be overcome before self-driving vehicles are safe enough for road use. Designing a car that thinks like a human yet adheres to the strict letter of the law is no simple task, nor is anticipating shared liability of human and autonomous vehicle drivers.
