September 20, 2016

CYBER CENTER: Fairy Tale or Nightmare: The Promise and Peril of the Internet of Things for Business

Stephen A. Riga

Once upon a time, the facilities manager of a large financial group asked a lawyer to review a service contract with a promising new vendor. The pitch was as rich as a pot of gold: energy savings of approximately 10 percent through the magic of the Internet, thousands of dollars in utility bills avoided through smarter power use, turning off a thousand lights left on across the corporation by using sound and movement sensors that can tell when a room is empty.

The diligent attorney got to work, but it was not long before the worries began. The first red flag was in the contract itself. It was not just a software contract, granting permission to use proprietary systems that determined when to turn off an unused light. The contract also provided for cloud storage and web portal access, promised security safeguards, and imposed indemnification for those situations where the vendor, through negligence or willful action, failed to protect the stored data.

No real concerns so far, but the contract also required the customer to represent that its collection and use of the data was permitted, and that it would comply with applicable privacy laws and would obtain those authorizations necessary to collect the data that would be gathered by the equipment and stored by the vendor. The contract went on to provide that failure to comply with such provisions could result in indemnification of the vendor by the customer.

Whose authorization might be needed and why? And so a trip down the rabbit hole began.

The Internet of Things (IoT), a web of everyday objects that connect to the Internet, has arrived. Gartner, Inc., an information technology research and advisory company, estimates that 6.4 billion objects will connect to the Internet in 2016, but that number is quickly expanding. Gartner projects that 5.5 million new things will be connected each day in 2016, and that by 2020 there will be over 20 billion connected objects. The focus in the press has been primarily on the consumer side of this new world, with individuals tracking their steps, managing their homes from their smartphones, and looking forward to the driverless cars that have been promised since the 1939 World’s Fair. The corporate environment has received less attention, but many of the promises made for personal uses also offer great potential for corporate users. For employers, new tools promise greater savings through energy conservation, increased efficiency, and even restructured workforces, as tasks that once required employees to collect and sort information, and then to implement changes based on the results, can now be managed remotely with a few clicks of the mouse.
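As a rough sanity check, the Gartner figures cited above are internally consistent: 5.5 million new connections per day adds about 2 billion objects over 2016, and growing from 6.4 billion objects to 20 billion by 2020 implies roughly a 33 percent compound annual growth rate. A minimal sketch of that arithmetic, using only the numbers quoted in the article:

```python
# Back-of-the-envelope check of the Gartner projections cited in the article.
BASE_2016 = 6.4e9         # connected objects in 2016 (Gartner estimate)
TARGET_2020 = 20e9        # projected connected objects by 2020
NEW_PER_DAY_2016 = 5.5e6  # new things connected each day in 2016

# Implied compound annual growth rate over the four years 2016 -> 2020.
cagr = (TARGET_2020 / BASE_2016) ** (1 / 4) - 1

# Objects added during 2016 at the stated daily rate.
added_2016 = NEW_PER_DAY_2016 * 365

print(f"Implied annual growth: {cagr:.0%}")         # ~33% per year
print(f"Added during 2016: {added_2016 / 1e9:.1f}B")  # ~2.0 billion objects
```

At that pace, the number of connected objects roughly doubles every two and a half years, which is why each person in a typical workplace can expect to encounter dozens of connected objects daily by the end of the decade.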

These new technologies introduce a host of new concerns, however, because introducing sensors into the everyday world can have a multitude of unintended consequences. For the unwary, the IoT represents both an opportunity to realize great savings and a trap that could end up swallowing those savings whole. Companies are awakening to just how vulnerable connected objects can be as the evidence accumulates that smart cars, medical devices, and virtually any other connected object can be hacked—often in ways that compromise both security and safety.

Companies must weigh their obligations carefully and factor not only their business needs and the legal rules governing how they collect and use data in their daily operations, but also the legal rights and rules that protect their customers and employees.

Data Privacy and Its Laws

Experience has taught the diligent attorney to be cautious. Providing counsel to a financial institution meant knowing something about the Gramm-Leach-Bliley Act, of course. Under the Act, financial institutions are required to describe their information-sharing practices to their clients and to safeguard clients’ sensitive data. Given that the contract required data authorizations, the attorney asked aloud, “Could a tool designed to turn a light off and on implicate Gramm-Leach-Bliley?” A quick Internet search of the vendor’s name and the term “privacy” suggested a reason to think the answer was yes.

Clicking through the results made for a frightening read. A company running an office-management program had implemented a system using technology similar to that offered to the financial group: sensors picked up both motion and sound, then transmitted the information via the Internet to the vendor’s data systems. The audio sensors captured sounds within the room and fed the stream to proprietary software on the vendor’s servers that analyzed the feed for spikes that might evidence activity indicating continued occupancy of the room. The manufacturer used an unencrypted feed, however, and a hack allowed access to that feed, producing an instant microphone in each room where the system was deployed.

If mics were unintentionally recording conversations with the institution’s customers and sharing that data with another company, would that comply with Gramm-Leach-Bliley? The diligent attorney did not think so.

Leaning back, a thought came to the attorney: “What a nightmare!”

The IoT is introducing sensors into all sorts of objects and connecting them to the Internet. If not implemented in a manner that addresses privacy and security, these systems can inadvertently produce actions and results inconsistent with a company’s obligations under state, federal, and international law.

Federal Law

Federal laws in the United States address privacy and data security through the regulation of specific sensitive sectors. Examples include the Health Insurance Portability and Accountability Act (HIPAA), which regulates privacy and security for health-care operations, and the aforementioned Gramm-Leach-Bliley Act, which imposes privacy and security rules on financial institutions.

Although companies operating within a regulated sphere face comprehensive privacy and security obligations, most companies face at least some federal privacy and security obligations. Federal laws impose confidentiality requirements on specific sensitive information, such as information concerning employee disabilities that an employer may receive when addressing employee rights under the Americans with Disabilities Act (ADA).

Where employment and regulated sectors intersect, employers must be particularly vigilant in order to address their role as a part of the regulated sector. Employer-sponsored health plans are subject to HIPAA rules, for example, and if the employer self-insures some or all of its medical benefits, it will be directly responsible for its compliance with HIPAA rules. Even if an employer provides medical coverage through insurance, however, its supporting coverage, such as health reimbursement arrangements (HRAs), health flexible spending accounts (HFSAs), and wellness programs, often falls under HIPAA’s definition of “health plan.”

It is with wellness programs that many employers have had their first significant exposure to the challenges posed by the IoT. For instance, some employers have introduced wearables as part of their wellness initiatives, generally working with third parties to track the information necessary to condition incentives on reaching certain activity targets. Selecting a vendor sensitive to its obligations under HIPAA is critical in this context, because the company should, by design, limit access to the information captured for administration of its wellness program. Vendor selection may therefore be the employer’s only opportunity to ensure that adequate privacy and security safeguards will be implemented, and to drill down into what data is captured and how the vendor might use that information. Failure to do so may result in improper use or disclosure of that information, exposing the employer to penalties under HIPAA, which can be quite substantial.

Beyond sector-specific regulation, the Federal Trade Commission (FTC) enforces privacy and data-security duties on companies through its regulation of unfair and deceptive trade practices. Where a company makes a representation about the privacy of information it collects and fails to live up to its stated commitments, the FTC can bring action against the company and impose substantial obligations in settlement.

Given the rapid development of the IoT, very few examples of this form of enforcement by the FTC are available. Its first enforcement action in the area, In re TRENDnet, Inc., No. 122-3090 (F.T.C. Feb. 7, 2014), does provide an illustration of the risks posed by the IoT. The FTC alleged that a vendor of Internet cameras, marketed for use as baby monitors and home and small-business security cameras, made false representations to customers about its security efforts while failing to adequately secure customers’ video and audio. A breach in 2012 resulted in nearly 700 live feeds posted online, including video and audio of babies in their cribs, young children at play, and adults going about their daily business. The breach garnered considerable press, resulting in wide dissemination of private images from these feeds. TRENDnet’s settlement with the FTC required TRENDnet to establish and maintain a comprehensive security program, including third-party review of its efforts, for a period of 20 years.

State Law

At the state level, legal obligations regarding privacy and security vary considerably, although most require the issuance of notices if an individual’s personal information (e.g., Social Security number) is breached. Almost one-quarter of states couple these laws with an obligation to implement safeguards to protect certain personal information. California, on the vanguard of data privacy and security, has integrated a right to privacy into the state constitution. For California and a handful of other states, the law imposes general data privacy and security obligations on all businesses operating within their borders and interacting with their citizens.

State laws can make a company’s legal compliance with privacy and security obligations challenging. Companies must be sensitive to the particular rules governing the jurisdictions in which they operate. Even in the most business-friendly jurisdictions, businesses should be aware that failure to meet legal standards concerning privacy and data security could result in significant exposure.

International Law

Companies that operate outside the United States must also factor in the legal frameworks of any country in which they operate or from whose citizens they collect data. Countries within the European Union treat privacy as a human right, and many nations impose comprehensive privacy and security rules for the data their laws protect. The ability to collect data is often severely circumscribed. Further, even within a single company, the ability to share personal information transnationally can require careful planning and documentation.

Companies based in the United States may not have any operations outside the United States, and yet their information may become subject to foreign jurisdictions in certain circumstances. If a U.S. company contracts with a company with operations outside the United States for data-storage services, for example, and the servers are based in another jurisdiction, that storage may be subject to the laws of a foreign jurisdiction. Although the company operating in the foreign country generally will be responsible for compliance with these laws, the impact may be felt by its customers if the company fails to comply with its legal obligations.

Together, this patchwork of laws and standards can make compliance with privacy and data-security obligations a challenge to manage. Where such issues are not factored in beforehand, the likelihood that a company’s operations will violate one or another applicable standard is high. Companies often do not get a second chance to secure the data they capture. Privacy failures often draw unwelcome scrutiny and costs that can easily eclipse the costs of maintaining an effective, compliant privacy policy.

The risk is not just one involving hacking or external threats. Data captured by the IoT is attractive to companies for many reasons, some of which raise significant compliance issues distinct from the obligation to protect data from external threats. FTC enforcement actions often involve companies using collected data for purposes other than those advertised at the time of collection. HIPAA precludes the use of health-plan information for employment purposes by the company that provides the plan to its employees. However useful a wearable may be for tracking an employee’s activities, using data captured by a wearable provided as part of a health plan to verify an employee’s inability to work, for example, would violate HIPAA. Similarly, downloaded vehicle data might evidence medical care for conditions that constitute a disability, implicating the ADA.

If the IoT and the data it captures is not carefully managed, improper usage or disclosure of data is likely unavoidable.

Data Security and the IoT

After further investigation, the diligent attorney determined that the vendor used sensors that did not record or transmit a live feed. The attorney was relieved, but haunting questions remained. Safeguards are easy to promise, and the news illustrates how such efforts can go awry. Even if the financial group’s sensitive customer data was not implicated, were there any other risks that the technology might introduce?

Perhaps a call to IT was in order.

The reality is that the IoT represents a significant shift in the arithmetic of risk, even if all legal responsibilities related to the introduction of IoT technology are addressed. It is simply a question of numbers. Today, an individual may have several different devices that connect to the Internet. Computers, tablets, and smartphones often are all integrated as part of an employee’s workflow, and for many they also constitute a major part of their private lives. Nevertheless, the number of devices accessing the Internet per individual is often in the single digits. If the projection of 20 billion objects connected by 2020 proves correct, each person in a typical work environment may come into contact with dozens of connected objects each day. Such objects will, in short, become ubiquitous.

Most IoT objects connect to the existing IT infrastructure, piggybacking on current systems to reach the Internet. For planning and assessment, it is critical to remember that each object (and its connection) represents a potential vulnerability to the systems with which it connects. It has been a long-standing challenge for corporate IT to manage the introduction and use of new equipment. Managing existing networks of computers, laptops, tablets, and smartphones, as well as all of the equipment connected to and supporting these systems, has been and continues to be hard work. The IoT promises to make that job much, much harder.

The challenges posed by the onset of bring-your-own-device (BYOD) policies are likely to be revisited with the IoT. Employees will be tempted to bring a host of additional objects into the workplace with the expectation that such wearables, vehicles, and tools will continue to connect and work. Employers may opt, like some did with smartphones, to tightly restrict their employees’ use of employer systems for IoT access to the Internet, limiting such access to those objects provided by the employer itself—over which the employer can exercise at least some control. Others are likely to be more permissive, allowing employees to bring their cavalcade of Internet-connected objects but controlling how and to what extent the foreign objects are permitted to connect to company systems. Still others will attempt a hands-off approach, not engaging the problem until some event forces their hand.

Like BYOD policies, restrictive and permissive approaches to the IoT each have their own advantages and disadvantages. Restrictive policies afford greater control, but will undoubtedly pose a political challenge for employees who are prevented from using their flashy new IoT gear via a company network. Permissive policies pose a greater risk that some object connecting to corporate systems will compromise them, but such risks can be managed with carefully crafted controls, and permissive policies promise both greater employee satisfaction and greater efficiency to the extent that the objects are valuable to the employees and their loss would be disruptive.

A hands-off approach, however, even if it does not violate one or another of the laws obligating a company to safeguard information, is ill advised given the looming flood of technology. Failure to protect systems is likely to have significant, harmful ramifications, including introducing tainted equipment into corporate networks, and becoming exposed to exploits that may do far more than simply allow an unscrupulous individual access to sensitive information. Ransomware and similar attacks can produce dire results, destroying systems and data unless their ransoms are paid, and sometimes even if they are paid. Active management of this risk is just good business sense and is at the heart of the services attorneys provide their clients.

The head of IT set the diligent attorney’s mind at ease. With an established policy for the introduction of new connected technologies in place, IT had confirmed that the technology met the company’s standards; indeed, planning was already well under way to integrate the proposed system into existing systems while controlling access to the data and protecting it from risks both internal and external. The nightmare scenarios receded, and the attorney finished the contract review, diligently tweaking that indemnification clause one last time before returning it to the facilities manager.

Stephen A. Riga

Stephen A. Riga is an attorney in the Data Privacy and Employee Benefits groups at Ogletree, Deakins, Nash, Smoak & Stewart, P.C.’s Indianapolis office.