March 01, 2016

Tiny Sensors, Huge Consequences

Unregulated Inferences from Big Data Create Ethical and Legal Dilemmas for Businesses and Consumers

The 1983 hit song “Every Breath You Take” by the Police was written as a heartbroken love ballad, but many of its lyrics also accurately describe the relationship we have with technology today. Every breath we take and every move we make can be recorded somewhere, voluntarily or involuntarily, by surveillance cameras, smartphones, and wearable health and wellness trackers such as the Fitbit. Wearable devices track information about our movements, heart rate, sleeping habits, weight, and more. The signals emitted by smartphones when we shop in a store can be used to monitor the time we spend in different aisles and our eventual purchases. Home security systems track when we enter and exit our homes and how much time we spend in each room. Smart thermostats can gather data on our energy use, which rooms we occupy, and our preferred temperatures.

Welcome to the Internet of Things (IoT), a world where we’re surrounded by electronic sensors that are collecting unprecedented quantities of personal information.1 When all of these data points are pooled together as sensor fusion data, they can be combined in unexpected ways to draw a wide variety of highly accurate and surprising inferences. Businesses collect this data and apply predictive analytics to sort people into groups that can be targeted for products and services. However, this sorting can become discriminatory when the data determines who is offered services and products and who is excluded. The problem becomes more acute when the services and opportunities at stake are public rather than private, such as access to education, power distribution, or voting.

Although not all discrimination is illegal, the information amassed from the IoT raises serious legal and ethical challenges that current legal and regulatory frameworks do not adequately address. When data is woven together to create new inferences, where is the line between useful data and discriminatory data that may be unintentionally encoded with biases regarding race, gender, economic buying power, or some other sensitive category? Where is the line between privacy and consent when much of our information is collected involuntarily or without our knowledge? What types of risks do companies expose themselves and their customers to when data is collected and used? How should policymakers address these issues?

Potholes, Wearables, and Discrimination

Data mining starts with data about human conduct, then learns from that data to generate new data. For example, analyzing ad clicks and time spent on a website can be used to gauge interest in a news topic or product. But if the original data used to train the learning algorithm is flawed, the results can have unintended consequences.2

For instance, St. George’s Hospital Medical School in the United Kingdom created a computer program in the early 1980s that sorted applicants to its medical school based on previous admissions decisions. Because those past decisions had systematically disfavored racial minorities and women, the school had unknowingly created an algorithm that inherited prior social biases. A commission that investigated the program noted that many of the school’s staff “had no idea of the contents of the program and those who did failed to report the bias.”3
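To make the mechanism concrete, consider the minimal Python sketch below. The data, field names, and scoring rule are all invented for illustration; this is not the St. George’s program, only a toy example of how a model that mimics past decisions reproduces the bias embedded in them.

    # Hypothetical sketch: a "model" that learns from biased historical decisions.
    # All records and names are invented for illustration.
    from collections import defaultdict

    # Historical admissions records: (group, qualified, admitted).
    # Past decisions systematically disfavored group "B" even when qualified.
    history = [
        ("A", True, True), ("A", True, True), ("A", False, False),
        ("B", True, False), ("B", True, False), ("B", False, False),
    ]

    # "Learn" the historical admission rate for each (group, qualified) pair.
    counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [admitted, total]
    for group, qualified, admitted in history:
        counts[(group, qualified)][0] += int(admitted)
        counts[(group, qualified)][1] += 1

    def predict_admit(group: str, qualified: bool) -> bool:
        """Predict admission by mimicking past decisions for similar applicants."""
        admitted, total = counts[(group, qualified)]
        return total > 0 and admitted / total >= 0.5

    # Two equally qualified applicants get different predictions because the
    # model has inherited the bias encoded in the historical data.
    print(predict_admit("A", qualified=True))  # True
    print(predict_admit("B", qualified=True))  # False

No one has to intend discrimination for this to happen; the bias rides in on the training data itself.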

There’s also a digital divide. People with less access to the Internet and the formal economy, including historically disadvantaged groups and rural populations, are more likely than the general population to be excluded from big data collection. This means that not all data is collected equally, which can lead to underrepresentation and unequal treatment. An article in Foreign Policy magazine noted that Boston’s Street Bump app, which gathers smartphone data as drivers pass over potholes and sends it to the city, creates a self-selecting sample because “it will necessarily have less data from those neighborhoods with fewer smartphone owners, which typically include older and less affluent populations.”4 If the city relied solely on this data to prioritize street repairs, underserved communities would not receive the governmental services needed to fix their streets.

One of the most pervasive technologies on the market today is the wearable health and wellness tracker, such as the Fitbit, Apple Watch, Garmin Forerunner, and many others. These devices help users capture and track data such as exercise, dietary and sleep habits, psychological stress, heart rate, weight, and other health information. Although a consumer may use a health tracker only for wellness purposes, absent any regulations the data could be used to draw inferences that affect health or life insurance coverage and premiums. The amount of exercise or sleep someone gets may influence life or auto insurance coverage, or determine someone’s creditworthiness if an inference is made that conscientious exercisers are better credit risks. An employer might pass on a job candidate if an inference is made that people with healthier personal habits will be more diligent employees.5

Taken together, all of these data points create pictures of people’s lives that can be used to determine their access to credit, employment, and other services, without any regard for accuracy or any opportunity for people to learn of or dispute inaccurate information attributed to them. While there are some regulations governing the information (including inferences) that goes into a credit decision, most inferences remain unregulated.6

Traditional privacy and information laws and antidiscrimination laws such as Title VII, the Americans with Disabilities Act (ADA), and the Genetic Information Nondiscrimination Act (GINA) do not address discrimination based on data inferences.7 Insurers may decline to insure, or charge more to, someone whose inferred risk profile makes them too expensive to insure. An employer could pass on hiring a potential employee based on health status, as long as the person does not have an ADA-covered disability. A health trait such as nicotine addiction or obesity could be treated as voluntary conduct when, in reality, both may be less voluntary and more biologically determined.

Hey Alexa! Let’s Talk Policy

Without clear regulatory guidance, there is substantial risk for both consumers and companies. Along with the potential legal issues tied to data collection, privacy, and security, there is an emerging array of ethical and potentially life-threatening issues. Many wearable devices are designed by consumer goods manufacturers and are not engineered with data security in mind. One researcher found that wireless insulin monitors can be hacked to give a diabetic patient an inaccurate reading, which could cause the person to administer an improper insulin dose.8

Historically, policymakers and the judicial system have been reactive rather than proactive in the face of new technology. The market and consumers wade into the water first, testing the limits of reasonable conduct until harm is exposed and regulators jump in; that is when regulators often turn to industry for guidance on what reasonable conduct should look like. We’ve seen this play out with unmanned aerial vehicles (UAVs), more commonly known as drones, which were available on the consumer market for some time with few regulations until privacy, copyright, security, physical injury, and other risks started popping up in courts and headlines across the country. Federal agencies such as the FBI and the Federal Aviation Administration are now taking a closer look at formal regulations.9

This is an opportune time for businesses to be more introspective about their IoT products and services so they can mitigate potential liability and protect consumers. By taking steps to define what is reasonable and unreasonable conduct, businesses can play an early role in shaping future big data policies rather than reacting to regulations made by policymakers who may not fully understand the depth and nuances of the technology.

Businesses looking to launch new products with data collection features should assess the risks and harms those features may create. Companies cannot prevent all harms, but they should at least have a reasonable process to identify the risks, contain them, and control how the data is used. If the data is used to make inferences, a business should consider the potential for discrimination, how those inferences fit within today’s legal framework, and possible responsive solutions.

The Amazon Echo, which hit the market in 2014, is a good example of thinking through risks and incorporating safeguards into design. The Echo is a voice-activated box that responds to a user’s verbal requests. Similar to the voice-activated Siri on Apple iPhones, Echo’s artificial intelligence concierge responds to “Alexa” and can do things like tell you the time, turn on your favorite music, retrieve sports scores, set an alarm, add items to a grocery list, and order an Uber.10 By simply saying, “Hey Alexa,” the unit lights up and its microphone records your voice, sends it to a cloud-based server to decode, and then delivers a response. The Echo can also be activated with a button or remote control.

With its constant presence, the Echo could have recorded everything said inside a consumer’s home at all times, which would have created enormous risk and a serious invasion of privacy. Most buyers likely wouldn’t assume that type of privacy risk if they knew about it. Rather than creating a system that constantly records and transfers information to the company’s servers, Amazon made the Echo voice-activated so that it collects and stores only the information needed to provide the service and no more. Consumers also know that they can delete their voice recording requests from Amazon’s servers through an Echo content management app.11 Amazon’s approach to the Echo balances mitigating the risks of collecting information in ways consumers did not anticipate with setting clear data collection expectations between the user and the company.
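As a rough illustration of that design choice, the hypothetical Python sketch below shows the general pattern of wake-word-gated data collection: audio is ignored until a local trigger is detected, and only the request itself is sent on. All function names and data are invented; this is not Amazon’s implementation, only a sketch of the data-minimization idea.

    # Hypothetical sketch of wake-word-gated collection (data minimization).
    # Names and logic are invented for illustration only.
    WAKE_WORD = "alexa"

    def detect_wake_word(audio_chunk: str) -> bool:
        """Stand-in for on-device wake-word detection (here, simple text matching)."""
        return WAKE_WORD in audio_chunk.lower()

    def send_to_cloud(request_audio: str) -> str:
        """Stand-in for a cloud service that decodes a request and returns a reply."""
        return f"(response to: {request_audio!r})"

    def listen(stream):
        """Process a stream of audio chunks; transmit only what follows the wake word."""
        armed = False
        for chunk in stream:
            if not armed:
                # Nothing is stored or transmitted until the wake word is heard.
                armed = detect_wake_word(chunk)
            else:
                print(send_to_cloud(chunk))  # only the request itself leaves the device
                armed = False

    # Casual conversation is ignored; only the request after "alexa" is sent.
    listen(["just chatting at home", "alexa", "what time is it"])

The design point is simply that the gate sits on the device, so the default is to collect nothing.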

Conclusion

We are in the midst of an exciting technological revolution that could, among other things, extend our lives, provide more safety and security, and create energy efficiencies. There is no one-size-fits-all solution to resolve all of the potential regulatory and ethical issues associated with IoT and big data. Perhaps we should end where we began and take a breath. What is reasonable for one company or business may not make sense for another. Best practices often include discerning what data is needed for a particular purpose, how it will be collected, and how it will be used. As the data becomes more sensitive, or its use unexpected or potentially discriminatory, the traditional privacy principles of notice, choice, data integrity, access and amendment, accountability for transfers, and security should be carefully analyzed and applied.

Endnotes

1. Scott R. Peppet, Regulating the Internet of Things: First Steps Toward Managing Discrimination, Privacy, Security, and Consent, 93 Tex. L. Rev. 85, 89 (2014).

2. Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Cal. L. Rev. 671, 674–75 (2016).

3. Stella Lowry & Gordon Macpherson, A Blot on the Profession, 296 British Med. J. 657 (1988).

4. Kate Crawford, Think Again: Big Data, Foreign Pol’y (May 10, 2013), http://foreignpolicy.com/2013/05/10/think-again-big-data/.

5. Peppet, supra note 1, at 123–26.

6. See, e.g., Fair Credit Reporting Act, 15 U.S.C. §§ 1681 et seq.

7. Peppet, supra note 1, at 124.

8. Id. at 134.

9. See Keith Laing, FAA Readies Drone Registration Rules, The Hill (Nov. 20, 2015), http://thehill.com/policy/transportation/260910-faa-readies-drone-registration-rules.

10. Dave Smith, 17 Surprisingly Useful Things Amazon Echo Can Do, Tech Insider (Feb. 6, 2016), http://www.techinsider.io/amazon-echo-17-features-2016-2.

11. Alexa and Alexa Device FAQs, Amazon Device Support, http://www.amazon.com/gp/help/customer/display.html?nodeId=201602230 (last visited July 18, 2016).
