IoT, the Internet of Threats? Novel Liability Issues for Connected, Autonomous Vehicles and Intelligent Transportation Systems
March 01, 2016
With autonomous vehicle technology still in its infancy, we can barely begin to envision how the widespread use of such vehicles will eventually (inevitably) perturb and expand the tort system as we know it today. Before cars became connected and “intelligent,” the area of vehicle liability was fairly well established, thanks to over a century of case law and oftentimes stringent motor vehicle regulation. But at the risk of sounding like a startup marketing cliché, autonomous technology, the Internet of Things (IoT), and connected vehicle systems will change everything in the field of vehicle liability.
Today, if someone drives a car negligently and hurts another, there could be a claims adjustment or tort action against the operator of the vehicle, and the victim could, more likely than not, receive some kind of compensation under a negligence claim. On the other hand, if a collision is due to a mechanical flaw or other defect in the vehicle itself, the manufacturer might be liable under strict liability theory, and major manufacturing defects or clear design flaws can lead to class action lawsuits, providing much relief to any number of affected plaintiffs. But how might these scenarios change when a self-driving car is involved in a collision?
Autonomous Vehicles Raise New Questions
Imagine that a collision occurs and the driver “at fault” is actually the autonomous vehicle itself, not the car’s owner. Who then is responsible for compensating the victim(s)? The car owner? The car manufacturer? The software company that sold the algorithm governing how the car should drive under various (perhaps subjective) conditions?
It is not too difficult to envision plaintiffs’ attorneys seeking the consulting services of highly educated scientific and mathematical experts. Will plaintiffs’ lawyers hired on contingency invest in highly paid, doctorate-holding software and artificial intelligence expert witnesses? And if so, which cases would be considered large enough to warrant such an investment? Would an attorney risk the high cost of an algorithm expert for a personal injury case involving only $10,000 in medical bills? What about $50,000?
This is a new area for would-be accident reconstructors who have a firm grasp of computer coding and the intricacies of artificial intelligence as applied to autonomous cars. And even all the new issues raised by a collision caused by a car that drives itself do not begin to address the other side of the smart-car coin: these vehicles are yet another source of risk for data breaches. After all, your car might know how many times you visit a particular liquor store, but do you want your spouse’s divorce attorney to know as well?
This article presents a brief outline of the novel questions and burgeoning complexity that this new technology will bring to liability in the automotive industry. It will clearly take quite some time for case law and motor vehicle regulations to catch up to these new challenges.
The Connected Vehicle: Big Data, Big Liability?
Until very recently, most cars were not connected to anything, except perhaps to voice services whereby a vehicle’s occupant could use a hands-free subscription service to call for help (e.g., GM OnStar). Increasingly, however, vehicles are equipped with full cellular data connectivity and wireless access to the Internet, allowing the car to become a true mobile information device and an IoT object.
The advantages of all this connectedness include vehicle safety feedback for the driver and automaker, hands-free navigation systems, and maps in real time—in essence, a car and computer all rolled into one. With this connectivity, however, come a host of new security and privacy issues that create liability for both the automaker and the related service providers.
Many forms of data can be collected, saved, and transmitted from a vehicle. For a vehicle with GPS capability, for example, location information is the most obvious source of sensitive information. But there are other, less conspicuous sources of data that, in the aggregate, can yield equally sensitive information about a driver or occupant of an automobile.
The connected automobile is capable of collecting all sorts of sensor data, including acceleration, velocity, and braking habits; video and image data from car cameras; driver and occupant identification through recorded voice data; call records (numbers dialed, to whom, and for how long); contact databases; and even payment and financial information. Such data can be processed using big data analytics to reveal granular profiles of a given driver or occupant. Should this data be breached, its leak or misuse could have huge consequences for the consumer—not to mention legal troubles for the manufacturer and the service company charged with safekeeping such data.1
In response to some of these concerns, the automotive industry has made voluntary efforts to ensure that unified policies are in place to mitigate the risk of a data breach. The Auto Alliance has published a detailed policy statement outlining the data protection principles adhered to (voluntarily) by all major car manufacturers in the U.S. marketplace.2 These principles include data minimization (recording only necessary data), retention policies designed to minimize the amount of historical data stored per customer, transparency, choice, and data anonymization.
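By way of illustration, here is a minimal Python sketch of how a retention and redaction policy along these lines might be implemented; the record fields, the 30-day window, and the function names are hypothetical and do not reflect any manufacturer’s actual practice.

```python
from dataclasses import dataclass, replace
from datetime import datetime, timedelta

RETENTION_WINDOW = timedelta(days=30)  # hypothetical retention limit

@dataclass(frozen=True)
class TelemetryRecord:
    vin: str            # vehicle identification number (a direct identifier)
    timestamp: datetime
    latitude: float
    longitude: float
    speed_mph: float

def apply_retention_policy(records: list, now: datetime) -> list:
    """Data minimization: discard records older than the retention window."""
    return [r for r in records if now - r.timestamp <= RETENTION_WINDOW]

def anonymize(record: TelemetryRecord) -> TelemetryRecord:
    """Redact the direct identifier (the VIN) before longer-term storage."""
    return replace(record, vin="REDACTED")
```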
All of these efforts provide a noteworthy starting point for consumer protection, but policies that set time limits for data retention and redact personally identifiable information are hardly a panacea for data breach liability. There are limits to how effective such methods can be in protecting consumers from unwanted inferences or identification. It is well known, for example, that anonymized data with names and identifying information such as a vehicle identification number (VIN) redacted can still be re-identified to a specific consumer by combining external data sources with modern big data analytics. A well-cited Carnegie Mellon study showed that a mere three data points (zip code, birthdate, and gender) could be used to uniquely identify 87 percent of the U.S. population.3 Much more can be inferred by probing publicly available data and applying powerful machine learning algorithms.
Some consumers might harbor a false sense of security because they do not, as a rule, give away personal information (such as their birthdate or gender) on online forums. But how many of us in this modern age can say that we have never received a happy birthday message by text, Facebook post, or tweet? Companies save this message data in the cloud and determine your birthdate (and other “private” information) by inference. This is the new world of inference in big data analytics: seemingly innocent, anonymous data and online activity can lead to de-anonymization and re-identification.
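The linkage attack described above can be illustrated with a toy Python sketch that joins “anonymized” vehicle data to a hypothetical external dataset on just three quasi-identifiers; every name, record, and field here is invented.

```python
# "Anonymized" trip summaries: the name and VIN are gone, but quasi-
# identifiers (zip code, birthdate, gender) remain.
anonymized_trips = [
    {"zip": "15213", "birthdate": "1984-07-02", "gender": "F",
     "frequent_stop": "liquor store on Fifth Ave"},
]

# A public external dataset (think voter rolls) keyed by the same fields.
public_records = [
    {"name": "Jane Doe", "zip": "15213",
     "birthdate": "1984-07-02", "gender": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "gender")

def reidentify(trips, registry):
    """Link redacted records back to named individuals by joining on
    quasi-identifiers alone."""
    matches = []
    for trip in trips:
        key = tuple(trip[q] for q in QUASI_IDENTIFIERS)
        for person in registry:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], trip["frequent_stop"]))
    return matches

print(reidentify(anonymized_trips, public_records))
# [('Jane Doe', 'liquor store on Fifth Ave')]
```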
Balancing Privacy with Safety
The collection of particular sets of data, for instance data pertaining to car safety and maintenance, while possibly sensitive from a privacy perspective, can be useful for legitimate safety and accident-prevention purposes. Therefore, data collection and retention practices must balance breach and liability concerns against the fact that such data has the potential to save lives. Many lives. In fact, simply deciding on a retention and minimization policy without considering the safety benefits of deep data analysis could carry liability problems in and of itself—even greater than liability from data or privacy breach alone.
The Auto Alliance’s privacy measures appear to acknowledge this problem. So while these voluntarily adopted precautions purport to limit data acquisition only to the data that car manufacturers believe to be necessary, and to limit retention to a certain time frame, some aspects of these privacy measures remain ambiguous. For instance, a driver’s behavior data need not fall under this privacy umbrella if the data is “used only for safety, diagnostics, warranty, maintenance, or compliance purposes.”4 Indeed, most data coming from car sensors can be tied to a legitimate safety purpose and still be used to identify an individual and infer very specific characteristics of that driver—characteristics that may have nothing to do with car safety or car maintenance whatsoever. The car manufacturers may in fact have little choice but to continue to collect this data and store it, if only for car maintenance information and safety purposes—but that does not mean this data is safe from being breached by a party with less than noble intentions.
Consider the Takata airbag, GM ignition switch, and Toyota sudden acceleration incidents. Each became the impetus for multimillion-dollar class action litigation. Using data analytics, a company could, over time, save and use telematics and sensor data to predict such problems before they cause enormous harm and unnecessary fatalities. How might a retention policy that allows a company to keep this data for a longer period influence its ability to predict and prevent such tragedies? Could prolonged data retention enable a company to mitigate such harm, perhaps even save lives? What if a company could save and utilize such data but its policies instead err on the side of consumer privacy?
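As a rough sketch of the analytics at issue, the following Python example flags components whose fault rate across a fleet exceeds an expected baseline. The fault logs, fleet sizes, and threshold are invented, and a real early-warning system would rely on far richer data and statistical testing.

```python
from collections import Counter

# Hypothetical fault reports harvested from retained fleet telematics,
# as (model, component) pairs.
fault_log = [
    ("ModelX", "airbag_inflator"), ("ModelX", "airbag_inflator"),
    ("ModelX", "airbag_inflator"), ("ModelX", "brake_sensor"),
    ("ModelY", "airbag_inflator"),
]

fleet_size = {"ModelX": 10, "ModelY": 20}  # reporting vehicles per model
BASELINE_RATE = 0.10                       # assumed tolerable fault share

def flag_defect_candidates(log, fleet, baseline=BASELINE_RATE):
    """Flag (model, component) pairs whose fault rate exceeds the baseline."""
    counts = Counter(log)
    return [(model, component, count / fleet[model])
            for (model, component), count in counts.items()
            if count / fleet[model] > baseline]

print(flag_defect_candidates(fault_log, fleet_size))
# [('ModelX', 'airbag_inflator', 0.3)]
```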
The Restatement (Third) of Torts, while not controlling, offers some insight into how a court might interpret the standard for liability here: “For the purpose of determining whether a product manufacturer has sufficient knowledge to give rise to a duty to warn, the manufacturer is held to the degree of knowledge and skill of an expert.”5 Furthermore:
In their capacity as experts, manufacturers must keep abreast of scientific knowledge, discoveries, and advances, and are presumed to know what is imparted thereby. They must be aware of all current information that may be gleaned from research, adverse reaction reports, scientific literature, and other available methods. This high standard ensures that the public will be protected from dangers as those dangers are discovered.6
It is very likely that available telematics data capabilities, combined with big data analytics, would qualify as scientific knowledge, discovery, current information, research, or “other available methods.” Therefore, according to the Restatement, an auto manufacturer or an original equipment manufacturer (OEM) would have an affirmative duty to collect, retain, and analyze car sensor data if there is some reasonable chance that the data could be used to detect a design defect or safety issue. Manufacturers are, in effect, considered experts as to the inferences attainable from their collected, available data insofar as the reasonable discovery of product safety issues is concerned. If powerful data analytics are available, they should be used—even if the collection of such data might bring about data breach liability or implicate privacy issues. Thus, companies that implement short retention policies on the large swaths of sensor data collected by connected and autonomous vehicles risk sizeable products liability claims for failure to discover defects that were “reasonably discoverable” using today’s readily available (albeit intimidating) data analytics tools and techniques.
Who’s Driving? And . . . Who’s Paying?
Negligence
An oversimplification of a general negligence claim is this: If you hit someone and it is your fault, you (or your insurer) pay. If you get hit and it is not your fault, you get paid. If you and the other driver are both at fault to some degree, each party’s recovery is reduced in proportion to his or her share of fault. And where the at-fault driver is uninsured, uninsured motorist coverage usually allows the party of lesser comparative fault to be paid for his or her damages.
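The apportionment arithmetic itself is simple. Below is a minimal Python sketch of comparative fault, with an optional bar modeling the modified-comparative-fault cutoff (commonly 50 or 51 percent) that some states impose; the figures are illustrative only.

```python
def comparative_fault_recovery(damages, plaintiff_fault, bar=None):
    """Reduce recovery by the plaintiff's own share of fault.

    Under pure comparative fault (bar=None), a partially at-fault
    plaintiff still recovers something. A modified regime bars recovery
    entirely once the plaintiff's fault reaches the cutoff.
    """
    if bar is not None and plaintiff_fault >= bar:
        return 0.0
    return damages * (1.0 - plaintiff_fault)

# A $10,000 claim where the injured driver is 20% at fault:
print(comparative_fault_recovery(10_000, 0.20))            # 8000.0
# The same driver at 60% fault in a state with a 50% bar:
print(comparative_fault_recovery(10_000, 0.60, bar=0.50))  # 0.0
```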
But what if the “person” at fault for the accident is not the car owner but the car itself? Who else would be drawn into lawsuits following a vehicle collision? The software company that designed the decision algorithms? The algorithm programmer? The artificial intelligence programmer? Or does duty vicariously attach to the owner/operator by virtue of using the vehicle for transportation on public roadways, even though the owner/operator had no influence on a vehicle’s “bad” decision?
Defining the scope of conduct that would be reasonable under a given set of road conditions might now expand to include the reasonableness of an autonomous vehicle’s decision-making matrix for those same road conditions. Such a question would necessarily involve complicated expert testimony as to the vehicle’s decision-making process—testimony that would not be cheap and would likely be very confusing to jurors. Car makers and their insurers could be drawn into any personal injury suit involving one of their vehicles and would want to prove vigorously that their car made the right decision despite the fact that a collision occurred, and therefore that the vehicle (and its algorithm) was not at fault.
Collision cases asking whether an algorithm was reasonably programmed or whether enough safeguards were incorporated into the artificial intelligence machinery could cause expert witness costs to skyrocket, thereby seriously delaying recovery by an injured plaintiff who, for instance, might only be seeking to recover $9,000 in medical bills. Such a system would create a disincentive for lawyers to take on relatively small cases (small to the attorney but significant to the injured party). The required expert testimony could become prohibitively expensive in cases seeking relatively moderate damage awards.
Products Liability
Aside from negligence, car makers risk tort liability for the performance of autonomous vehicles sounding in strict liability (products liability) when a machine malfunctions or makes an erroneous decision that causes a collision. Unlike negligence, duty and breach of duty need not be proven in a products liability case, so this would seem a more appropriate cause of action for determining whether the actual function of an autonomous vehicle caused a collision. There are three major theories of recovery under products liability—manufacturing defect, design defect, and failure to warn—each with different applicability to autonomous vehicle liability. Under each of these theories, the plaintiff must prove that the injury resulted from an unreasonably dangerous condition of the product.
Foreseeability of a problem or condition was not traditionally considered relevant in such strict liability actions under the Restatement (Second) of Torts. The Third Restatement, however, has introduced the element of foreseeability,7 and this viewpoint is now making headway in many jurisdictions, blurring the lines between negligence and strict liability in the context of autonomous vehicle liability. For example, a plaintiff could argue that a certain road condition was foreseeable by a car with certain sensor capabilities and that a manufacturer should be held strictly liable for failing to react properly to a condition that was detectable by onboard sensors, resulting in a crash. Conversely, a manufacturer might argue that a condition would not be reasonably foreseeable by a machine. This raises the question: to what standard should an intelligent vehicle be held? Should it be held to the standard of an imperfect human driver who also makes mistakes at a given rate? Should it be better than the most expert human driver in the same conditions? Or should the standard be that if the vehicle possesses the data, it must take a reasonable action based on that available data?
Manufacturing defect claims in the context of autonomous vehicles will inherently implicate both software and hardware and their intended use. Hardware malfunctions involving sensors are more easily resolved as to strict liability. We already have sensors and automated systems in vehicles, such as traction control and antilock braking, that save lives. Were a particular hardware sensor to have a known design or manufacturing defect that caused a malfunction, liability would attach if the defect were not mitigated.
However, when the culprit is software and the decisions that software makes, strict liability, especially with a foreseeability element introduced, presents a much murkier environment in which to determine and distribute fault. Software systems are inherently likely to contain bugs and are under constant revision to be made “better,” quite apart from bug fixes.
One oft-cited hypothetical illustrating the moral dilemmas facing the OEM is the “bus or the baby” problem. If an autonomous car were given only two path possibilities, one of which was to hit a baby crossing in a crosswalk and the other to hit a school bus full of children, which decision is negligent? Or is it a design defect to make a given choice? Or does a given decision made in the software design process create an imputed mental state that rises to knowing or intentional conduct, implicating gross negligence or worse?
What about software bugs that cause collisions? Are these product defects that necessarily implicate strict products liability? Software bugs could perhaps be better dealt with under negligence theory, which allows for foreseeability and some reasonability standards, but then what would be the reasonability standard for programming autonomous vehicles? What is reasonable care for an artificial intelligence algorithm programmer? What if the computer is using its own deep learning algorithms to drive the decision-making process rather than relying on a programmer to determine the decision that the computer makes in a given situation? In such a case, is the car maker strictly liable for what the computer “reasonably” determined on its own based on the information available to “it”? What if a computer “reasonably” fails to identify a ball rolling into a street as a threat and hits the boy following it? Should the computer be held to a higher standard than the human driver who makes the same mistake?
And what if a software update that would mitigate a dangerous behavior becomes available and a user or a vehicle fails to download the update before an accident? Does the failure to update give rise to a negligence or products liability action? What if a certain automobile cannot upgrade to a certain feature because it lacks a novel sensor or video capability? Should such a vehicle be allowed to remain on the road?
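Establishing the factual predicate for such a claim (which mitigating update was available but not yet installed at the moment of the crash) is itself a data question. The Python sketch below shows one hypothetical way an over-the-air update audit might answer it; the version numbers and dates are invented.

```python
from datetime import datetime

# Hypothetical over-the-air release history; version 2.3.1 is assumed
# to contain the mitigating fix.
releases = {
    "2.2.0": datetime(2015, 6, 15),
    "2.3.1": datetime(2015, 11, 1),
}

def missed_updates(crash_time, installed_version):
    """List releases published after the installed version but before
    the crash, i.e., updates the vehicle could have applied in time."""
    installed_date = releases[installed_version]
    return [version for version, published in releases.items()
            if installed_date < published <= crash_time]

print(missed_updates(datetime(2016, 1, 10), "2.2.0"))  # ['2.3.1']
```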
None of these are easy questions from a legal or moral perspective, and the litigation of such issues will inherently become complicated and expensive, perhaps reaching many different legal conclusions in many different jurisdictions before some reasonable commonality in principles emerges.
GTLA Liability
Intelligent transportation systems will often involve not only autonomous decision making and vehicle-to-vehicle communication to resolve path conflicts and avoid collisions, but also intelligent infrastructure. Traffic signals will communicate their status to oncoming vehicles. Cameras and sensors may offer warning information. Drawbridges will transmit their state, and roads will transmit usage and obstacle information. Construction signs will include radio or Internet transmission of their location and instructions for detours or collision avoidance. Flaggers, police officers, and emergency crews will likely have wearable devices that transmit their location and issue instructions to decrease speed and avoid paths that might bring them into harm’s way. Many of these systems and sensors will become part of the transportation infrastructure and even the roads themselves.
This raises the question of how liability will be addressed under the government tort liability acts (GTLAs) of the many jurisdictions that allow the government to be sued in negligence for roadway incidents. A GTLA is a statutory construct that allows a state to be sued in tort under negligence theory through an explicit statutory waiver of the state’s broad sovereign immunity. The scope of these waivers and the language defining their exceptions differ from state to state and therefore produce sometimes dramatically different results. Liability is generally clearer in the case of government-owned vehicles that cause accidents, but when the infrastructure itself is the cause of an accident, legal responsibility and the suspension of immunity under GTLA statutes are less clear cut and would lead to completely different legal results in different jurisdictions under current law.
What would a state’s liability be for an intelligent traffic light that erroneously broadcasts incorrect status to an autonomous vehicle? Case law defining whether liability attaches for malfunctioning analog signals has already developed, and it is quite illuminating as to the jurisdictional differences. In Michigan, for example, an appellate court held that defective light poles, traffic signals, and signs are not part of the “highway.”8 This ruling was based on the Michigan GTLA language that limits the highway exception to negligent maintenance of public highways.9
Contrast this with Tennessee’s GTLA highway exception that allows suit for claims arising from negligently constructing or maintaining streets, alleys, or sidewalks and from the negligent construction or maintenance of public improvements thereon, “includ[ing] traffic control devices.”10 Other states have additional subtle and not-so-subtle language differences in their GTLAs, with subsequent common law interpretations thereof, that make the liability of government-owned intelligent infrastructure that fails to operate safely and causes damage highly divergent based on jurisdiction.
With So Much Uncertainty, Why Bother?
It is quite obvious that many liability issues are yet to be settled with the advent of this new transportation technology. This makes it hugely difficult for car manufacturers and the lawyers who advise them to make informed risk-reward business decisions about introducing these technologies to the market. However, as has been the case with technologies such as automatic braking systems and traction control, computer-automated control or semi-control in automobiles can save, and already has saved, many lives. Autonomous driving would undoubtedly have the same effect. Unlike my 17-year-old (who I feel is a very good and safe driver among his oft-texting millennial peers), cars do not need to listen to loud music, text their friends, talk with the car’s occupants, daydream, or argue with a significant other on the phone while driving—let alone drink and drive. Autonomous cars will not likely be programmed to disobey traffic signals or exceed the speed limit. They will also be able to detect and react to threats much more quickly than humans can. All of these lifesaving features need to be introduced into society. But what if personal injury and products liability lawsuits serve to disincentivize or delay the launch of these products to the market? That would translate into unnecessary traffic deaths.
It is therefore necessary to begin to envision alternatives to tort liability to compensate crash victims for their injuries. In the absence of some form of protection from legal liability, car makers may be hesitant to deliver lifesaving autonomous technologies to the market.
The National Childhood Vaccine Injury Act (NCVIA) is one informative model of government-administered personal injury compensation, created to promote industry participation in a market for lifesaving technology that also faced litigation barriers to entry. In the 1980s, the number of lawsuits brought against vaccine manufacturers increased dramatically, and manufacturers made large payouts to individuals and families claiming vaccine injury. Due to increasing litigation, mounting legal fees, and large jury awards, many pharmaceutical companies left the vaccine business. By the end of 1984, only one U.S. company still manufactured the critical DPT vaccine, and other vaccines were losing manufacturers as well. In October 1986, the U.S. Congress responded to the precarious state of the vaccine market by passing the NCVIA. Under the act, those claiming injury from a covered vaccine cannot sue a manufacturer without first filing a claim with the U.S. Court of Federal Claims. The vaccine market has stabilized since the NCVIA’s passage: in the United States, six manufacturers supply most of the standard childhood and adult vaccines, and a handful of smaller companies and organizations supply other, less commonly used vaccines. A similar mechanism may be necessary or helpful to incentivize manufacturers to bring autonomous vehicles to market.
Autonomous “intelligent” vehicles will obviously face products liability issues, just like their less intelligent motor vehicle ancestors. However, current models of assigning fault and distributing liability for cars operated by humans are inadequate and cannot be easily applied where the driver is the car itself. Onerous products liability lawsuits could cause manufacturers to delay or abandon the introduction of safer-than-human transportation technologies, and that could cause unnecessary vehicular death. We therefore need to consider alternative models for just compensation while maintaining an incentive for manufacturers to produce these potentially lifesaving, life-enhancing, and environmentally friendly vehicles.
Endnotes
1. One need only look to recent judgments and settlements in high-profile breaches to understand the gravity of the situation from a corporate liability perspective.
2. Alliance of Auto. Mfrs., Inc. & Ass’n of Global Automakers, Inc., Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services (2014) [hereinafter Auto Alliance Principles], available at http://www.autoalliance.org/auto-issues/automotive-privacy/principles.
3. Latanya Sweeney, Simple Demographics Often Identify People Uniquely (Carnegie Mellon Univ., Data Privacy Working Paper No. 3, 2000), available at http://dataprivacylab.org/projects/identifiability/paper1.pdf.
4. Auto Alliance Principles, supra note 2, at 5.
5. American Law of Products Liability 3d (emphasis added) (citing Restatement (Third) of Torts: Prods. Liab.).
6. Id. (emphasis added).
7. Compare Restatement (Second) of Torts § 402A, with Restatement (Third) of Torts: Prods. Liab. § 2.
8. Weaver v. City of Detroit, 651 N.W.2d 482, 486 (Mich. Ct. App. 2002).
9. Mich. Comp. Laws § 691.1402.
10. Tenn. Code Ann. §§ 29-20-202 to -205 (emphasis added).