June 01, 2018

Why Smart Car Safety Depends on Cybersecurity

By Roland L. Trope and Thomas J. Smedinghoff

For autonomous cars to be safe, they must communicate with one another (especially about their respective position, speed, course, and intended maneuvers or turns). Their communications must be two-way. But that makes them and their onboard operating computers vulnerable to hackers. And if the hackers take control of the vehicle’s safety-critical systems—acceleration, steering, and braking—they can cause the vehicle to misbehave and crash. In this article, we explain why designers and makers of autonomous cars and other smart cars will not achieve vehicle safety if they fail to achieve vehicle cybersecurity.

Over the preceding decade, car makers have increasingly equipped their vehicles with capabilities to transmit and receive information, phasing in what have come to be called “connected cars”1 or “smart cars.”2 In the process, hundreds of onboard computer-operated controls have replaced mechanical systems to control a car’s operating features and, most importantly, its “safety critical” systems (such as drivetrain, propulsion, steering, and braking). Many of those onboard control systems engage in two-way communications over cellular systems, Wi-Fi, and the Internet.

Connected cars communicate telemetric data, infotainment content, and intra-vehicular safety information and are now the dominant passenger vehicle on the highway.3 On the horizon, the computer control and communication capabilities of connected cars will soon be enhanced with vehicle-to-vehicle communications and computers “at the wheel” in fully autonomous cars. At the same time, as we progress from connected to autonomous cars, the reliance on software and digital communications capabilities will increase exponentially, and, as a result, car safety will become ever more dependent on car cybersecurity.

We refer to all software-enabled vehicles capable of computer-based two-way communications—i.e., both connected cars and autonomous cars—as “smart cars.” But the term “smart cars” should be used advisedly. While it implies added safety, invites trust in the software and the communication capabilities of the onboard computers, and offers the promise of artificial intelligence enhancements, the reality may be very different. The term “smart cars” omits any hint of the inherent cyber vulnerabilities that accompany these cars’ ever-expanding reliance on software and two-way communicating computer systems and ignores the prodigious hazards to vehicle occupants and pedestrians that hackers may create if they remotely gain control of a car’s safety-critical systems.

At this crucial stage in the development of smart cars, consumers have started to ask whether these vehicles deserve to be trusted. Key questions include

  • To what extent is the safety of smart cars threatened by vulnerabilities to cyber intrusions?
  • Will the transition from connected cars to autonomous cars magnify the cyber risks to smart cars?
  • Will government regulations coax automotive makers into improving car safety by improving smart car cybersecurity?

Cyber Vulnerabilities of Connected Smart Cars

Connected cars contain more than 100 embedded and interconnected computerized Electronic Control Units (ECUs).4 ECUs operate a connected car’s key features: powertrain (e.g., engine, transmission, drive-shaft) and chassis control (including steering, brakes, airbag, windshield wipers), as well as infotainment systems (e.g., navigation, telephone, entertainment) and telematics (e.g., crash reporting and emergency warning). Many of these ECUs engage in two-way communications (via USB, Bluetooth, Wi-Fi, the Internet, or cell-phone systems) via a communication interface. Each of those ECUs, as well as their communication interfaces, may contain cyber vulnerabilities and thus opportunities for hackers to gain unauthorized access and cause harm.

These potential vulnerabilities are enhanced by the fact that the ECUs of a connected car are not isolated, but interconnected with one another by one or more digital data buses:

To facilitate communication among multiple ECUs without the need for complicated and extensive wiring systems, automakers began locating ECUs on in-vehicle communication networks, commonly referred to as buses or bus systems. According to NHTSA [the National Highway Traffic Safety Administration], the controller area network (CAN) . . . has become the most commonly used in-vehicle communication network or bus; . . . [A]utomakers may locate all ECUs on a single in-vehicle network or include one network to support safety-critical vehicle functions, such as steering and braking, and another network to support convenience and entertainment systems.5

This connectivity of ECUs creates potential vulnerabilities at the communications interfaces with the vehicle, the interfaces with the ECUs, and the data buses.
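The broadcast nature of these buses is central to the risk. A classic CAN data frame carries only an arbitration ID and up to eight data bytes; nothing in the frame identifies, let alone authenticates, the sender. The following Python sketch uses the Linux SocketCAN frame layout to make that structure concrete; the message ID and payload values are purely illustrative, not taken from any real vehicle.

```python
import struct

# Linux SocketCAN classic frame layout: 32-bit arbitration ID,
# 1-byte data length code (DLC), 3 padding bytes, 8 data bytes.
CAN_FRAME = struct.Struct("<IB3x8s")

def build_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack an arbitration ID and payload into a 16-byte classic CAN frame."""
    return CAN_FRAME.pack(can_id, len(data), data.ljust(8, b"\x00"))

def parse_can_frame(raw: bytes):
    """Unpack a 16-byte classic CAN frame into (can_id, data)."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id & 0x1FFFFFFF, data[:dlc]

# A hypothetical control message: note there is no sender field and no
# authentication field, so any node on the bus can emit any ID it likes.
frame = build_can_frame(0x244, b"\x01\x02\x03\x04")
can_id, payload = parse_can_frame(frame)
```

Because every node on the bus sees, and can emit, every frame, a single compromised ECU (for example, in an infotainment unit) can inject frames that safety-critical ECUs will treat as legitimate.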

The ever-expanding size and complexity of the software code in the ECUs and data bus protocols further multiply the potential for errors and related vulnerabilities. If “bad actors” hack into the CAN bus, they may gain opportunities to find onboard vulnerabilities in the vast, complex lines of software code required to operate the vehicle’s safety-critical ECUs. As noted in a Government Accountability Office (GAO) report, “As the lines of software code in vehicles increases[sic], so does the potential for software errors, such as coding errors, and related vulnerabilities”6 that hackers can find and exploit. Figure 1 below, from that GAO report, illustrates the prodigious quantity of software code in modern vehicles.7

Each vulnerability creates a potential entry point or “attack surface”—an opportunity for hackers, by direct or remote means, to gain unauthorized access to a vehicle’s ECUs and data buses. Having compromised a vehicle’s data buses, hackers may gain access to, and take control of, the vehicle’s safety-critical systems, causing those systems to misbehave and posing serious risks to multiple vehicles, passengers, and bystanders.

Most worrisome is the access hackers could gain through long-range (over 1 kilometer) wireless interfaces, such as the “cellular connections on the telematics unit” that send data about the car to the manufacturer. As the GAO explains, “Through such interfaces, the cyber attacker could . . . exploit vulnerabilities to access the target vehicles from anywhere in the world and take control over the vehicles’ safety-critical systems.”8

Figure 2 depicts the key communications interfaces that a vehicle cyber intrusion could exploit.9

Moreover, as noted by the European Union Agency for Network and Information Security, the most significant cybersecurity challenge for “smart cars”

resides . . . in the security of car components and aftermarket products, where security functions have to be implemented in spite of several kinds of limitations: for example, security requirements may conflict with safety requirements. Furthermore, the very large number of interfaces to secure may lead to planning and cost issues.10

Manufacturers that did not proactively address cyber vulnerabilities have begun to bear the costs of recalls and software patches when third-party researchers demonstrate that “bad actors” could exploit such vulnerabilities to access and gain control of safety-critical systems. Examples include

  • A recall of 1.4 million vehicles11 after researchers, from ten miles away, wirelessly exploited a “vulnerability in Chrysler’s Uconnect dashboard computers,” and, while the Jeep Cherokee was being driven, gained control of “dashboard functions, steering, transmission and brakes.” The researchers cut the transmission in one episode, and in another they “cut the Jeep’s brakes,” causing it to slide “uncontrollably into a ditch.”12 The recall ultimately affected multiple models, including Dodge Vipers, Dodge Rams, Jeep Grand Cherokees and Cherokee SUVs, Dodge Durango SUVs, and Dodge Challenger sports coupes.13
  • Software patches issued to 2.2 million vehicles after a German motoring association reverse engineered BMW telematics software, imitated BMW servers, and sent “remote unlocking instructions to vehicles” that hacked the BMW ConnectedDrive and unlocked the cars. The hack exploited a feature that “allows drivers . . . to request remote unlocking of their car from a BMW assistance line.”14
  • Software patches distributed “to every Tesla Model S on the road” after researchers demonstrated they could “make a malicious web page” and if a Tesla driver accessed the site a “bad actor” could remotely hack a Tesla Model S’s infotainment system to start or cut the engine.15

As connected cars evolve into autonomous cars, they increasingly will be equipped with safety features that rely on the addition of new software and two-way communication capabilities. Paradoxically, that will add cyber vulnerabilities to the vehicle. In particular, the addition of vehicle-to-vehicle communication capabilities will probably make vehicle safety significantly more dependent on vehicle cybersecurity.

Cyber Vulnerabilities of Autonomous Smart Cars

The deployment of autonomous cars appears a near-certainty, although it probably will involve a gradual transition over several model years. The pace at which that transition occurs may well depend on whether consumers ultimately conclude that they and their families will be far safer—or far less safe—if they turn the wheel over to a smart car’s computers, or if they are willing to risk driving on highways where smart cars (aided by artificial intelligence) are “learning” to drive and to make “safety-critical” decisions while driving.

The sources of potential cyber vulnerabilities in autonomous cars include not only the communications interfaces through which hackers may gain unauthorized access and cause mischief, but also failures to design the systems to handle every conceivable scenario and unintentional coding errors in the millions of lines of code required. Finding and correcting all of those problems is almost impossible. As a recent RAND study concluded:

[A]utonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads. . . .

[T]hird-party testers cannot drive their way to safety. . . . Uncertainty will remain.16

It may not matter to consumers that autonomously driven smart cars may outperform human drivers because “autonomous vehicles are never drunk, distracted, or tired” and those human failings “are involved in 41 percent, 10 percent, and 2.5 percent of all fatal crashes, respectively.”17 It also may not be persuasive to consumers that “more than 90 percent of crashes are caused by human errors . . . such as driving too fast and misjudging other drivers’ behaviors. . . .”18 It is human nature. Humans do not trust things that they do not comprehend. And most humans sense that they do not understand computer-based machines, especially those doing things humans long believed could be done solely by humans and that may define what it means to be human.

Consumers therefore may weigh vehicle misbehavior more heavily when it is attributable to cars driven by computers, artificial intelligence, and robotics. While the exact cause is unclear, initial reports of an Uber self-driving Volvo XC90 SUV that struck and killed a pedestrian as she walked her bike, at night, “across the street, outside the crosswalk”19 only serve to emphasize these concerns. Consumers may weigh the risks just as heavily if hacker-controlled smart cars cause a multiple-car collision and fatalities. The makers of smart cars and of the cars’ onboard, two-way connected “smarts” will have to prove that their technology—“although definitely still learning and maturing—doesn’t amount to flooding the nation’s roadways with dangerously adolescent robot drivers.”20 As the transition from connected car to fully autonomous vehicle proceeds and consumer assessments emerge, connected car communication technologies and their cyber risks will remain critical to the safety achieved with smart cars.

In apparent anticipation of the deployment of autonomous cars, the National Highway Traffic Safety Administration (NHTSA) concluded that current safety offerings in connected cars do not provide a means to reduce the number of multivehicle crashes.21 Automotive accident fatalities in the United States exceeded 40,000 in 201622 and again in 2017,23 a threshold not reached since 2007.24 It appears that the benefits of new safety features in connected cars may have been outweighed by the advent of devices that tempt drivers to be dangerously distracted (e.g., devices such as onboard displays of GPS maps, infotainment systems, texting-enabled mobile phones).

Thus, NHTSA believes that adding vehicle-to-vehicle (V2V) communication technologies will reduce multivehicle crashes by enabling vehicles to convey “safety information about themselves to other vehicles” and to prevent “impending intersection crashes.”25 Accordingly, in January 2017, NHTSA proposed a rule that, if adopted,26 would require all new light vehicles (and thus all smart cars) “be capable of V2V communications, such that they will send and receive Basic Safety Messages to and from other vehicles.”27 Upon receiving such messages, the car’s V2V system would determine whether to warn its driver of an “imminent crash situation”28 and/or to intervene to avert it. NHTSA reasons that the proposed V2V standard could reduce vehicle accidents and fatalities by helping drivers (including autonomous “drivers”) avoid crashes where more than one vehicle is involved and the V2V communication is able to warn all vehicles potentially involved of the imminent hazard(s).29

But while adding V2V communications to every smart car could help reduce accidents, V2V implementation will necessitate adding new software and new access points and communication interfaces to the onboard ECUs and data buses. Thus, the cyber vulnerabilities of each smart car will multiply accordingly—a fact that NHTSA also recognizes.

Consider, for example, the Basic Safety Messages or “BSMs” that V2V systems must continuously send to and receive from the other cars on the road. Security measures must be implemented to ensure that each message is authentic, unmodified, and properly attributed to the appropriate vehicle. Otherwise BSMs modified, corrupted, or spoofed by “bad actors” could subvert V2V communications and turn them into serious hazards for occupants of multiple vehicles relying on their accuracy to make manual or automatic crash avoidance decisions.
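What “authentic, unmodified, and properly attributed” requires can be illustrated with a simplified sketch. NHTSA’s actual V2V design relies on digital certificates and signatures under the IEEE 1609.2 security standard; the Python sketch below substitutes a shared-key HMAC as a stand-in for that machinery, and the message fields and key are invented for illustration.

```python
import hashlib
import hmac
import json

TAG_LEN = 32  # full HMAC-SHA256 tag

def sign_bsm(key: bytes, bsm: dict) -> bytes:
    """Serialize a Basic Safety Message and append an integrity tag."""
    body = json.dumps(bsm, sort_keys=True).encode()
    return body + hmac.new(key, body, hashlib.sha256).digest()

def verify_bsm(key: bytes, wire: bytes):
    """Return the parsed BSM if the tag checks out, else None."""
    body, tag = wire[:-TAG_LEN], wire[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # modified, corrupted, or spoofed message
    return json.loads(body)

key = b"demo-shared-key"  # illustrative only; real V2V uses certificates
msg = sign_bsm(key, {"speed_mps": 26.8, "heading_deg": 90, "braking": False})

# Flipping even a few bytes of the body invalidates the tag:
tampered = msg.replace(b"26.8", b"62.8")
```

The same check cuts both ways: if a “bad actor” obtains a valid signing credential, a forged message passes verification exactly as a genuine one would, which is why the following paragraphs turn to compromised credentials.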

The magnitude of those risks grows when one considers that the safety of high-speed driving often depends on split-second decisions. NHTSA’s proposed V2V standard would require a minimum send-and-receive range of 300 meters (approximately 984 feet). BSMs, received at that range, would afford two vehicles approaching each other head-on about 5.6 seconds for their respective onboard applications “to detect the crash scenario and issue a warning.” Neither human nor autonomous drivers may react well under such time constraints when trying to determine whether an alarm sounded by a V2V system should be trusted and heeded or mistrusted and disregarded.
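The 5.6-second figure can be checked with simple arithmetic. Assuming each vehicle travels at roughly 60 mph (about 26.8 m/s)—an assumed scenario speed, since the rule’s exact parameters are not restated here—the head-on closing speed is about 53.6 m/s over the 300-meter range:

```python
RANGE_M = 300.0        # proposed minimum V2V send/receive range
MPH_TO_MPS = 0.44704

# Assumed scenario: two vehicles approach head-on, each at 60 mph.
speed_each_mps = 60 * MPH_TO_MPS     # ~26.8 m/s per vehicle
closing_speed = 2 * speed_each_mps   # ~53.6 m/s combined

time_to_meet_s = RANGE_M / closing_speed
print(f"{time_to_meet_s:.1f} s")     # ~5.6 s from the first possible BSM receipt
```

That window must cover message receipt, verification, threat assessment, the warning itself, and the driver’s (or autonomous system’s) reaction.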

If “bad actors” manage to compromise the security credentials or processes required for sender and message authentication, they may be able to construct and transmit malicious BSMs that would appear valid to the receiving vehicle because such messages would appear to be “using actual credentials given to a trusted device.”30 In such instances, the receiving vehicle’s V2V system might issue false alarms to steering, propulsion, and braking ECUs that conflict with information those ECUs continue to receive from other onboard sensors. If the drivers, whether human or autonomous, are unable to interpret the conflicting information in a split second, they may default to hitting the brakes and attempting an avoidance maneuver that brings their respective cars into multiple-vehicle collisions.

Hacking could pose serious challenges to the decision making that car makers might have programmed into their smart cars in anticipation of the addition of a V2V communication system with its capabilities for issuing warnings. Whether the car maker will have anticipated the situation created by a hacker remains a significant “known unknown.” Whatever gains in safety V2V systems might offer, their reliance on two-way communications also adds significantly to a smart car’s cyber vulnerabilities and risks. The risks should not be underestimated because, as GAO explains, the CAN bus “has become the most commonly used in-vehicle network that facilitates communication among ECUs,” and the “one mitigation option—message authentication and encryption—cannot be easily incorporated onto CAN buses, as CAN does not provide sufficient bandwidth to host these protections.”31
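GAO’s bandwidth point can be made concrete. A classic CAN data frame carries at most eight data bytes, so any authentication tag must share that space with the sensor data itself. The sketch below assumes a 4-byte truncated HMAC tag (an illustrative choice, not a standardized scheme) and shows the trade-off:

```python
import hashlib
import hmac

CAN_MAX_PAYLOAD = 8  # bytes available in a classic CAN data frame

def authenticated_payload(key: bytes, data: bytes, tag_len: int = 4) -> bytes:
    """Fit sensor data plus a truncated HMAC tag into one CAN payload."""
    room = CAN_MAX_PAYLOAD - tag_len
    if len(data) > room:
        raise ValueError(f"only {room} data bytes fit alongside a {tag_len}-byte tag")
    tag = hmac.new(key, data, hashlib.sha256).digest()[:tag_len]
    return data + tag

payload = authenticated_payload(b"demo-key", b"\x01\x02\x03\x04")
# Half of every frame now carries the tag rather than sensor data, and a
# 4-byte truncated tag is far weaker than a full 32-byte HMAC-SHA256.
```

Either the tag crowds out data (forcing more frames onto an already busy bus) or it is truncated to the point of offering little security—which is the bandwidth dilemma GAO describes.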

NHTSA’s Enforcement Guidance

In September 2016, NHTSA published as a final rule its Enforcement Guidance Bulletin on Safety-Related Defects and Automated Safety Technologies (Enforcement Guidance) that regulates, among other safety hazards, the cyber risks of two-way communicating devices and onboard software in smart cars.32 The Enforcement Guidance emphasizes that “if software . . . creates or introduces an unreasonable safety risk to motor vehicle systems [including its ‘critical systems’ such as braking, steering, or acceleration], then that safety risk constitutes a defect compelling a recall.”33

Moreover, NHTSA adopts the position that a defect compelling a recall may exist in software contained in a motor vehicle or any of its onboard devices without evidence of a major system performance failure.34 If any of the smart car’s onboard devices, such as an IoT infotainment unit or a V2V communication system, contains vulnerable software, NHTSA would view that as a “defect.” And if such defect might give hackers access to safety-critical systems, then the defect might be viewed by NHTSA as a “defect compelling a recall.”

Makers of smart cars and their onboard software, ECUs, and two-way communication devices face a challenging paradox: To enhance safety, they may need to add features such as a V2V communication system and its related software; but adding such capabilities may substantially increase a smart car’s cyber vulnerability. Discovering all such cyber vulnerabilities before the hackers can exploit them is, of course, a very difficult task. But even if discovered, remediating them is also problematic:

[T]here will be situations when a security vulnerability may be known to NHTSA and manufacturers but not all V2V-equipped vehicles will have installed the patches or updates to mitigate the flaw. During this period, vehicles in the fleet may be vulnerable until the patch or update is installed.35

As such concerns demonstrate, smart car safety will be increasingly dependent on how well smart car makers understand, anticipate, and address vehicle cybersecurity—and how well they handle it throughout the life cycle of smart cars. 
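The patch-lag window NHTSA describes can be illustrated with a toy model. If a constant fraction of the remaining vulnerable fleet installs an update each week—a purely illustrative assumption, not NHTSA data—the share still exposed decays only geometrically:

```python
def unpatched_share(weeks: int, weekly_patch_rate: float = 0.10) -> float:
    """Fraction of the fleet still unpatched after `weeks`, assuming a
    constant fraction of remaining vulnerable vehicles installs the
    update each week (an illustrative assumption, not NHTSA data)."""
    return (1 - weekly_patch_rate) ** weeks

# Even at 10% adoption per week, more than a quarter of the fleet
# remains exposed three months after the fix ships.
share_12_weeks = unpatched_share(12)
```

Under these assumptions, a known, already-fixed vulnerability remains exploitable across a meaningful share of the fleet for months—precisely the interval the quoted passage warns about.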

Endnotes

1. See Gov’t Accountability Office, GAO-17-656, Vehicle Data Privacy 10 (July 2017), available at https://www.gao.gov/assets/690/686284.pdf.

2. See European Union Agency for Network & Info. Secur. (ENISA), Cybersecurity and Resilience of Smart Cars, 6 (Dec. 2016), available at https://www.enisa.europa.eu/publications/cyber-security-and-resilience-of-smart-cars.

3. As a recent GAO report notes: “Nearly all selected automakers offer connected vehicles or plan to offer them in the next 5 or more years. Specifically, 13 of the 16 automakers we interviewed sell new vehicles that met our definition of a connected vehicle—ones that come equipped with technologies and services that transmit and receive data wirelessly….” GAO-17-656, supra note 1, at 10.

4. ENISA, supra note 2, at 49.

5. Gov’t Accountability Office, GAO-16-350, Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-World Attack 7–9 (Mar. 2016), available at https://www.gao.gov/assets/680/676064.pdf.

6. Id. at 8–9.

7. Id. at 9, fig. 2.

8. Id. at 13–14.

9. Id. at 14, fig. 3.

10. ENISA, supra note 2, at 6.

11. Andy Greenberg, After Jeep Hack, Chrysler Recalls 1.4M Vehicles for Bug Fix, Wired (July 24, 2015), https://www.wired.com/2015/07/jeep-hack-chrysler-recalls-1-4m-vehicles-bug-fix.

12. Andy Greenberg, Hackers Remotely Kill a Jeep on the Highway—With Me in It, Wired (July 21, 2015), https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway.

13. Chris Matthews, Jeep Hack: Fiat Recalls 1.4 Million Vehicles for Software Fix, Fortune (July 24, 2015), http://fortune.com/2015/07/24/jeep-cherokee-recall.

14. Martyn Williams, BMW Cars Found Vulnerable in Connected Drive Hack, PCWorld (Jan. 30, 2015), https://www.pcworld.com/article/2878437/bmw-cars-found-vulnerable-in-connected-drive-hack.html.

15. Kim Zetter, Researchers Hacked a Model S, But Tesla’s Already Released a Patch, Wired (Aug. 6, 2015), https://www.wired.com/2015/08/researchers-hacked-model-s-teslas-already.

16. Nidhi Kalra & Susan M. Paddock, RAND Corp., Driving to Safety—How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? 10 (2016), available at https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf.

17. Id. at 1.

18. Id.

19. Philip E. Ross, Uber Robocar Kills Pedestrian, Despite Presence of Safety Driver, IEEE Spectrum (Mar. 25, 2018), available at https://spectrum.ieee.org/cars-that-think/transportation/self-driving/uber-robocar-kills-pedestrian-despite-presence-of-safety-driver. It might not matter that a subsequent update suggests that the pedestrian “darted out in front of the car too quickly for either the car or the safety driver to react.” Id.

20. Jeremy Hsu, When It Comes to Safety, Autonomous Cars Are Still “Teen Drivers,” Scientific American (Jan. 18, 2017), available at https://www.scientificamerican.com/article/when-it-comes-to-safety-autonomous-cars-are-still-teen-drivers1.

21. Nat’l Highway Traffic Safety Admin., Federal Motor Vehicle Safety Standards: V2V Communications, 82 Fed. Reg. 3854, 3878 (Jan. 12, 2017), available at https://www.gpo.gov/fdsys/pkg/FR-2017-01-12/pdf/2016-31059.pdf.

22. Neal E. Boudette, U.S. Traffic Deaths Rise for a Second Straight Year, N.Y. Times (Feb. 15, 2018), available at https://www.nytimes.com/2017/02/15/business/highway-traffic-safety.html.

23. Motor Vehicle Deaths in U.S. Again Top 40,000, Ins. J. (Feb. 16, 2018), available at https://www.insurancejournal.com/news/national/2018/02/16/480956.htm.

24. Id.

25. NHTSA, V2V Communications, 82 Fed. Reg. at 3855.

26. NHTSA plans a phased-in adoption:

The agency is proposing that the effective date for manufacturers to begin implementing these new requirements would be two model years after the final rule is adopted, with a three year phase-in period to accommodate vehicle manufacturers’ product cycles. Assuming a final rule is issued in 2019, this would mean that the phase-in period would begin in 2021, and all vehicles subject to that final rule would be required to comply in 2023.

Id. at 3857.

27. Id. at 3855 (emphasis added).

28. Id. at 3879.

29. Of the 5.5 million police-reported crashes annually in the U.S. between 2010 and 2013, “3.8 million (69 percent of all crashes) were multi-vehicle crashes.” Id. at 3880.

30. Id. at 3916.

31. GAO-16-350, supra note 5, at 25.

32. NHTSA Enforcement Guidance Bulletin 2016-02: Safety-Related Defects and Automated Safety Technologies, 81 Fed. Reg. 65,705, 65,705–06 (Sept. 23, 2016).

33. Id. (emphasis added).

34. 81 Fed. Reg. at 65,708 (emphasis added).

35. NHTSA, V2V Communications, 82 Fed. Reg. at 3919.


Roland Trope ([email protected]) is a partner in Trope and Schramm LLP in its New York City offices and an adjunct professor in the Departments of Law and of Electrical Engineering and Computer Science at the U.S. Military Academy at West Point. Thomas J. Smedinghoff ([email protected]) is of counsel in the Privacy & Cybersecurity Practice Group at the law firm of Locke Lord LLP and is a member of the ABA Cybersecurity Legal Task Force. The views expressed herein are solely those of the authors and have not been approved by, and should not be attributed to, the U.S. Military Academy, U.S. Army, U.S. Department of Defense, or U.S. government.