November 17, 2014

Net Gets Physical: What You Need to Know About the Internet of Things

John A. Rothchild

The Internet of Things (IoT) has become a hot topic in venues as disparate as consumer electronics shows and government regulatory proceedings. The topic is a big one, and is poised to get much bigger. This article will introduce the IoT and address some of the legal issues that will accompany its growth. 

There is no universally accepted definition of the term “Internet of Things.” It is generally used to refer to the collection of physical objects that are linked to each other, and to users, through the Internet or other computer networks. Since computers themselves are not usually thought of as belonging to the IoT, it might be more accurate to limit its scope to the still-fuzzy category of “things that didn’t use to be connected to the Internet, but now are.” Examples of objects that belong to the IoT include: 

  • A home security system that can be monitored and controlled via the Internet
  • A sensor in your basement that sends you a message when it detects flooding
  • An oven that can be controlled using your smartphone
  • A bathroom scale that sends your weight readings via the network to your doctor
  • Implanted medical devices, like pacemakers and insulin pumps, that can be controlled wirelessly
  • Wearable devices that count the number of steps you take and display the information on a website
  • A dog collar that lets you geolocate your pet
  • Smart electrical metering systems that transmit your usage information to the utility
  • Home thermostats that can be controlled via a mobile device and that compile and display your usage data
  • Automobiles with telematics systems that allow them to be monitored and controlled remotely
  • RFID tags attached to pallets holding a vendor’s inventory
  • Sensors on jet engines that relay performance information to a monitoring system 

There have long been non-computer “things” connected to the Internet. Technology archaeologists trace the IoT back to a Coca-Cola vending machine at the Carnegie Mellon University Computer Science Department. In 1982, grad students wired it up to the university’s network so that users of the machine could check from their desks whether it was empty and whether the bottles were cold. Commercial and industrial applications have also been common for some time. For example, in 2005, Wal-Mart began mandating that its suppliers place RFID tags on shipping pallets to facilitate inventory control. But the IoT has burst into popular consciousness only in the past year or two. At the Consumer Electronics Show in January 2014, IoT devices got what one report called “the lion’s share of the spotlight,” featuring, among many other curiosities, a connected toothbrush. In the same month Google announced that it would purchase Nest Labs, maker of the eponymous Internet-connected home thermostat, for a stunning $3.2 billion. In November 2013, the Federal Trade Commission (FTC) picked up on the trend with a day-long workshop titled “Internet of Things: Privacy and Security in a Connected World.” 

If projections are anywhere near accurate, the IoT is going to get much, much bigger. In December 2013, the technology research firm Gartner predicted that the number of IoT devices will grow to 26 billion in 2020 (from 900 million in 2009), with associated revenues exceeding $300 billion – far outstripping the predicted 7.3 billion personal computers, tablets, and smartphones. Market research firm IDC says the market of goods and services surrounding the IoT will reach an astounding $8.9 trillion in 2020, which is about the current GDP of China. 

The omnipresent IoT will give rise to a whole range of legal issues. The following discussion will address two of them: (1) privacy and security, and (2) remote disablement of connected devices. 

Privacy and Security

Privacy and security issues are inevitable, arising from the facts that (1) objects on the IoT transmit information via public data networks, (2) the information these objects gather is often transmitted to third parties, (3) users of the devices are often unaware of how their information is used by those recipients, (4) privacy and security considerations have not been of central concern in the design of IoT devices, and (5) some IoT devices are designed to spy on users. 

1. Transmission of sensitive data via public networks. In many applications, data collected by IoT devices travels from the device via a wired (HomePlug) or wireless (Wi-Fi, Bluetooth, near-field communication, radio-frequency identification) connection to the Internet, which then routes the information to its intended recipient. The Internet is not an inherently secure medium for the transmission of information, and becomes secure only with the addition of encryption-based systems to prevent unauthorized access. But connected IoT devices do not always incorporate adequate security. 
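One common building block for securing device-to-server transmissions is a message authentication code, which lets the recipient detect tampering in transit. The following is a minimal sketch, not taken from any actual device; the shared key and the sensor payload are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret provisioned on both the device and the server.
DEVICE_KEY = b"example-device-key"

def sign_reading(reading: dict, key: bytes = DEVICE_KEY) -> dict:
    """Attach an HMAC-SHA256 tag so the server can detect tampering in transit."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_reading(message: dict, key: bytes = DEVICE_KEY) -> bool:
    """Recompute the tag server-side; compare_digest resists timing attacks."""
    expected = hmac.new(key, message["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

if __name__ == "__main__":
    msg = sign_reading({"sensor": "basement-flood", "wet": True})
    print(verify_reading(msg))  # an unmodified message verifies: True

    msg["payload"] = msg["payload"].replace("true", "false")
    print(verify_reading(msg))  # a tampered message fails: False
```

A design of this sort authenticates the data but does not conceal it; concealment additionally requires encryption, such as carrying the message over TLS.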

An example of such a lapse occurred with a wireless camera system sold by a company called TRENDnet, Inc. The cameras, designed for use in homes and small businesses and (ironically as it turned out) sold under the SecurView brand name, connected to the Internet via Wi-Fi wireless networking. The system was designed so that the camera user could monitor the live video and audio feeds via the Internet. According to the FTC, a flaw in the camera’s software allowed a hacker to discover the IP addresses through which the feeds could be monitored. The hacker posted this information online, allowing anyone to monitor feeds from nearly 700 of the cameras. The FTC noted that “these compromised live feeds displayed private areas of users’ homes and allowed the unauthorized surveillance of infants sleeping in their cribs, young children playing, and adults engaging in typical daily activities. The breach was widely reported in news articles online, many of which featured photos taken from the compromised live feeds or hyperlinks to access such feeds.” The order settling the case requires TRENDnet to establish and maintain “a comprehensive security program.” In re TRENDnet, Inc., FTC Docket No. C-4426 (2014) (consent order). In its press release, the FTC described the case as “the agency’s first action against a marketer of an everyday product with interconnectivity to the Internet and other mobile devices – commonly referred to as the ‘Internet of Things.’” It will not be the last. 

2. Transmission of sensitive data to third parties. Privacy risks also arise from the fact that data collected by IoT devices is often routed to third parties. For example, the energy usage of a household that has a smart metering system is collected by the utility. Patterns of electricity usage are surprisingly revealing of the activities that give rise to that usage. Law enforcement agencies have for some time made use of household monthly electricity bills to identify houses in which marijuana plants are being grown; the grow lights consume lots of electricity. Much more detailed information about household activities can be derived from the usage patterns collected by smart meters. Usage profiles can be mined to determine at what times a washing machine, a toaster, or a kettle is operated. According to a report from NIST, data collected from a residential smart electricity meter can reveal when the occupants sleep, eat, shower, and watch television; the residents’ work schedule, sleeping patterns, and other lifestyle habits; how many people are living at the house; whether anybody is home; and where they are located in the house. Revealing data may also be collected via plug-in electric vehicle charging stations, including the location of the vehicle each time it is charged. Information of this sort may be of interest to insurance companies, marketers, law enforcement authorities, private litigants, landlords, creditors, the press, and criminals. See National Institute of Standards and Technology, Guidelines for Smart Grid Cyber Security: Vol. 2, Privacy and the Smart Grid (2010). Plans to network smart appliances via home area networks will provide utilities (and, in principle, consumers) with still more detailed information about home electricity usage. 
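To see how usage patterns can be mined, consider a toy sketch of the idea: a simple step-change detector over a minute-by-minute household load series. The wattage “signatures” and readings below are invented for illustration; real disaggregation techniques are far more sophisticated.

```python
# Toy illustration: inferring appliance activity from a whole-house load series.
# These wattage "signatures" are invented for illustration only.
SIGNATURES = {1500: "kettle", 800: "toaster", 500: "washing machine"}

def detect_events(load_watts):
    """Flag upward step changes in aggregate load and guess the appliance by size."""
    events = []
    for minute in range(1, len(load_watts)):
        step = load_watts[minute] - load_watts[minute - 1]
        if step in SIGNATURES:
            events.append((minute, SIGNATURES[step]))
    return events

# A morning's readings: 200 W baseline, kettle on at minute 2, toaster at minute 5.
readings = [200, 200, 1700, 1700, 200, 1000, 1000, 200]
print(detect_events(readings))  # → [(2, 'kettle'), (5, 'toaster')]
```

Even this crude approach recovers when the kettle and toaster were used, which illustrates why finer-grained smart-meter data is so much more revealing than a monthly bill.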

Data from use of electrical appliances may also be routed through the appliance manufacturer. A representative of General Electric explained at the 2013 FTC hearing that the data it collects from connected residential ovens and other electrical appliances is currently not exploited, but in the future, GE may decide to use it for marketing purposes. 

There is presently little effective legal protection of the privacy of data collected via the smart grid. State legislators have begun to take notice. In 2010, California enacted the first state law specifically protecting smart grid data. The statute provides that, with certain exceptions, “[a]n electrical corporation or gas corporation shall not share, disclose, or otherwise make accessible to any third party a customer’s electrical or gas consumption data.” Cal. Pub. Util. Code § 8380(b)(1). While various general purpose state and federal privacy laws may in theory protect the privacy of data collected via the smart grid, a 2011 article concluded that existing privacy protections are inadequate. See Cheryl Dancey Balough, Privacy Implications of Smart Meters, 86 Chi.-Kent L. Rev. 161 (2011). 

3. Users’ lack of awareness. These privacy and security issues are exacerbated by the fact that most users of IoT devices have very little understanding of the data flows associated with their usage and their potential impacts. Consumers are not accustomed to the idea that the everyday devices with which they interact, and which formerly were their mute and obedient servants, may now be sharing their personal information with unknown third parties. A consumer’s refrigerator may be blabbing to Electrolux (the Swedish manufacturer that in September 2014 purchased General Electric’s appliances division), and her thermostat may be whispering to Google (the new owner of the Nest connected home thermostat). Her Fitbit is definitely collecting information about her movements and sending it to Fitbit, Inc.’s computer servers. Fitbit’s privacy policy, available from its website, states that it does not share personally identifiable information with third parties except in narrow circumstances. But it also advises that “this policy may change over time. If any modifications substantially change your rights under this policy, we will send you an email where possible, and always provide notice on the Site.” How many Fitbit users will ever consult the manufacturer’s website to assess the privacy implications of wearing that activity-tracking device? This points to a key privacy issue with IoT devices: they typically lack a user interface that makes it feasible to deliver notice to the consumer about the privacy implications of her use of the device. 

The manufacturers of IoT devices are apparently not trying very hard to inform consumers of the uses that are made of personal information collected via these devices. A study of health and fitness smartphone and tablet apps by a privacy advocacy group found that: 

  • 26% of the free apps and 40% of the paid apps had no privacy policy
  • Only 43% of free apps and 25% of the paid apps provided a link from within the app to a privacy policy on the developer’s website
  • 39% of the free apps and 30% of the paid apps sent data to someone not disclosed by the developer
  • Only 13% of free apps and 10% of paid apps encrypted all data transmissions 

See Privacy Rights Clearinghouse, Mobile Health and Fitness Applications and Information Privacy: Report to California Consumer Protection Foundation (2013). 

4. Manufacturers’ lack of incentives. The tendency of IoT devices to lack effective privacy and security features is due in large part to the absence of any perceived incentive to expend resources on such features. Manufacturers believe, perhaps with justification, that they will get little return on their investment in such improvements. As a speaker at the FTC’s 2013 hearings noted: “There is no financial incentive to companies to make their devices secure. When is the last time you saw a bad review on Amazon because some product had a security vulnerability? Never.” 

Researchers have demonstrated the vulnerability of certain IoT devices to unauthorized access and control, including implantable medical devices and automobiles. Some implantable medical devices are designed so that they can be controlled using a wireless connection to a device outside the body. Naturally, only medical personnel and the patient should be able to exercise such control. But a 2008 study demonstrated that a particular implantable pacemaker/defibrillator could be taken over and controlled by an unauthorized attacker. A simulated attack succeeded in determining what type of device was implanted, intercepting telemetry data from the device, changing the device settings, and delivering an electrical shock. See Halperin et al., Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses (2008). Other studies have demonstrated the vulnerability of automobiles equipped with telematics systems. A 2011 study showed that an automobile’s systems can be compromised through a variety of routes. One method required inserting a CD with special files on it into the car’s CD player. Another attack used a smartphone that connected to the car’s systems via Bluetooth, enabling it to deliver a Trojan horse application. A third attack was initiated by placing a phone call to the car, which exploited a flaw and caused the car to download malicious code from the Internet. Since all of the car’s electronic systems are connected via the controller area network bus, once the car is compromised through any route, the attacker has access to all of the systems. The simulated attack was able to cause the car to tweet its GPS location, record conversations in the car’s cabin (using the microphone intended for hands-free phone operation), and unlock its doors on demand. See Checkoway et al., Comprehensive Experimental Analyses of Automotive Attack Surfaces (2011). 

5. Invasion of privacy by design. Some IoT devices have features that are designed to invade the user’s privacy. That was the case with software installed on rented laptop computers. Aaron’s, a national chain of rental outlets, installed PC Rental Agent software on computers that it rented to consumers. In addition to allowing the computer to be disabled remotely, the software collected personal information about the user. According to the FTC, the software had the capability of “logging keystrokes, capturing screenshots, and using the computer’s webcam,” and collected personal information such as passwords, medical information, bank statements, and photographs of the users in intimate situations. Aaron’s did not always disclose the presence of the software to the renter, or obtain her consent. Aaron’s settled an action that the FTC brought, agreeing to entry of an order prohibiting it from using monitoring technology on a rental computer. See In re Aaron’s, Inc., FTC Docket No. C-4442 (2014) (consent order). 

Remote Disablement of IoT Devices

Another issue relating to the IoT that has not yet received much notice is the remote disablement of IoT devices by the vendor. Unlike ordinary consumer devices, which once sold are beyond the control of the device’s seller or manufacturer, an IoT device may remain accessible via the network. This allows the vendor to interact with the device so as to modify its functioning. 

A well-publicized instance of this occurred in 2009, when Amazon remotely deleted two e-books, George Orwell’s 1984 and Animal Farm, from the Kindle e-book readers of consumers who had purchased the books. Amazon justified the deletions on the ground that it had discovered that the company that provided the texts of the books did not have rights to them; deleting the books would undo Amazon’s unauthorized distribution of the texts. Amazon gave refunds to the affected consumers, but this did not prevent it from receiving a hail of criticism. A lawsuit against Amazon brought by a 17-year-old high school student in Michigan resulted in a settlement under which Amazon agreed not to engage in such conduct again. 

A manufacturer or retailer may have various reasons for building in the capacity to disable a device remotely. Prominent among these is to retain the ability to put pressure on a buyer who purchases the device on credit and fails to make a required payment. Disabling the device, or threatening to do so unless payment is made, is an alternative to physical repossession of the device that offers substantial advantages to the creditor. 

This practice has its antecedents in remote disablement of computers when the vendor believed that the owner owed money on the hardware or software. One method of accomplishing this is by including programming, called a “logic bomb” or “time bomb,” which causes the computer to stop functioning on a certain date if the user fails to enter a code that the vendor will only supply once the user makes the payment that the vendor demands. Several courts held this procedure to be improper. See, e.g., Werner, Zaroff, Slotnick, Stern & Askenazy v. Lewis, 155 Misc. 2d 558, 588 N.Y.S.2d 960 (N.Y. Civ.Ct. 1992) (breach of contract); Clayton X-Ray Co. v. Professional Systems Corp., 812 S.W.2d 565 (Mo. Ct. App.1991) (breach of warranty and conversion). A logic bomb that destroys computer data may also constitute a computer crime. See State v. Corcoran, 522 N.W.2d 226, 186 Wis.2d 616 (Wis. Ct. App.1994). However, software deactivation pursuant to the terms of a lease or sale may be justified. See American Computer Trust Leasing v. Jack Farrell Implement Co., 763 F.Supp. 1473 (D. Minn. 1991), aff’d and remanded sub nom. American Computer Trust Leasing v. Boerboom Int’l, Inc., 967 F.2d 1208 (8th Cir.1992). 
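The mechanism the courts confronted can be sketched in a few lines. This is an illustrative sketch only, not any vendor’s actual code; the cutoff date and unlock code are hypothetical.

```python
from datetime import date
from typing import Optional

# Minimal sketch of the date-triggered disablement ("time bomb") mechanism
# described above. The cutoff date and unlock code are hypothetical.
CUTOFF = date(2015, 1, 1)
UNLOCK_CODE = "PAID-IN-FULL"

def software_enabled(today: date, entered_code: Optional[str]) -> bool:
    """Before the cutoff date the software runs normally; on or after it,
    the software refuses to run unless the vendor-supplied code is entered."""
    if today < CUTOFF:
        return True
    return entered_code == UNLOCK_CODE

print(software_enabled(date(2014, 12, 1), None))           # before cutoff: True
print(software_enabled(date(2015, 2, 1), None))            # after cutoff: False
print(software_enabled(date(2015, 2, 1), "PAID-IN-FULL"))  # code entered: True
```

The legal difficulty is not the simplicity of the mechanism but its self-help character: the vendor, not a court, decides when the switch is thrown.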

An increasingly common use of remote disablement is in connection with automobiles that are purchased on credit. The vendor equips the car with a piece of equipment called a starter interrupt device. If the borrower fails to make a payment, the lender can send a signal to the device from a computer or smartphone, disabling the car’s ignition system. The device is typically used on cars that are sold on credit to borrowers with low credit scores. According to one report, the devices are used in about 25 percent of purchases that are financed with subprime auto loans. Starter interrupt devices replace the repo man with a system that is much more efficient. The devices incorporate GPS technology, so the lender can track the car’s location in real time. This allows the lender to shut down the car at a time that will minimize harm to the borrower, and potential damage to the car, such as by waiting until the car is sitting in the borrower’s driveway. The resulting intrusion on the borrower’s privacy is readily apparent. Some borrowers claim that their cars have been shut down while driving on a freeway or idling at a stop light, but manufacturers of the devices say that they are designed so this cannot occur. See Michael Corkery & Jessica Silver-Greenberg, Miss a Payment? Good Luck Moving That Car, N.Y. Times (Sept. 25, 2014), at A1. 

The legality of using remote disablement to enforce a creditor’s rights is in many situations unclear. If the device in question secures a loan, Article 9 of the Uniform Commercial Code may apply. UCC § 9-609 limits the right of a secured party to exercise self-help in case of default, allowing equipment to be rendered unusable without judicial process only if it can be done “without breach of the peace.” This limitation would apply if a creditor sought to enforce a security interest in a business computer by disabling it via the network. The UCC does not define “breach of the peace” and the courts have not developed any uniform interpretation, so its applicability to remote disablement is unclear. However, the absence of physical confrontation makes a breach of the peace less likely with remote disablement than with sending out the traditional repo man. 

Several state enactments of § 9-609 impose additional limitations on this sort of electronic self-help. See Conn. Gen. Stat. Ann. § 42a-9-609(d)(2) (providing that electronic self-help is permitted only if the debtor has separately agreed to it, and the creditor gives 15-days’ notice before exercising self-help); Colo. Rev. Stat. Ann. § 4-9-609(e) (providing that the secured party may not disable a computer program embedded in the collateral “if immediate injury to any person or property is a reasonably foreseeable consequence of such action”). 

The use of remote disablement in connection with consumer transactions is less clear. Section 9-609 does not govern the disablement of consumer goods, such as an automobile for personal use. The Uniform Computer Information Transactions Act (2000) (UCITA) prohibits use of electronic self-help as a remedy in case of breach of a license agreement in a consumer transaction. See UCITA § 816 (prohibiting use of “electronic self-help” in “mass-market transactions”). UCITA has been enacted only in Virginia and Maryland. The ALI Principles of the Law of Software Contracts (2010) (“ALI Principles”) forbid use of electronic measures as a remedy for contract breach in consumer transactions. See ALI Principles § 4.03 (forbidding use of “automated disablement” in consumer transactions). The ALI Principles are not law, but only a statement of recommended best practices. 

There is little law that directly addresses the use of starter interrupt devices. In 2012, the California legislature enacted a law specifically governing the use of such devices by automobile dealers who provide their own financing and retain a security interest in the vehicle; such dealers often sell used cars to buyers with poor credit histories. The provision allows use of the devices as long as the seller notifies the buyer of the existence of the device at the time of sale, and provides 48 hours’ warning before using it. See Cal. Civ. Code § 2983.37. Many states have laws that regulate repossession of consumer goods, but it is unclear whether remote disablement of a vehicle, which does not entail any loss of possession, would qualify as repossession under these statutes. Regulators in several states have issued informal opinions concluding that the use of starter interrupt devices is unlawful. For example, a 2012 opinion letter from the Wisconsin Department of Financial Institutions states that the use of starter interrupt devices “is prohibited” on the ground that it constitutes an unfair collection practice and is unconscionable. 


Given the huge interest and investments that anything relating to the IoT is drawing, and with an anticipated installed base of billions of IoT devices in the coming years, lawyers, companies, and regulators are well advised to keep a close eye on this technological space.

Additional Resources

For other materials on this topic, please refer to the following.

BLS Programs Material Library

Calling All Toasters: Risk Management for the Internet of Things (PDF)
2014 BLS Spring Meeting

John A. Rothchild

Associate Professor, Wayne State University Law School