September 25, 2020 Feature

The Brain on Your Wrist: The Legal Implications of Wearable Artificial Intelligence

By Gary Marchant

Artificial intelligence (AI) is rapidly disrupting every industry in our economy and every sector of our society, and now it is starting to be something we wear on our bodies. The second generation of mobile health technologies goes beyond just measuring and recording data such as the number of steps taken or hours slept, to using artificial intelligence to undertake complex tasks such as predicting heart problems, diagnosing mental illnesses, or helping to prevent coronavirus infection. The power, capabilities, and use of AI-enabled wearables will further increase with the rollout of 5G technology.

Wearable technologies are already improving our health, and we are only at the beginning of the mobile health revolution. First-generation wearables such as fitness trackers increase the number of steps users take per day, which is associated with reduced mortality.1 The second wave of wearables uses machine learning and other AI techniques to provide even more complex and valuable health and wellness benefits. For example, the latest versions of the Apple Watch come with a built-in electrocardiogram (ECG) for monitoring heart rhythms and atrial fibrillation. There have already been anecdotal reports of this feature saving lives by detecting previously unnoticed heart irregularities.2 Fitbit, recently purchased by Google, has enabled blood oxygen level monitoring on some of its existing models3 and will use Google’s new Cloud Healthcare program to connect Fitbit user data with electronic medical records (EMRs).4

A number of companies are developing wearable “smart” sweat sensors that can monitor dehydration and glucose and sodium levels in real time in athletes and workers, as well as drug metabolism and inflammation biomarkers in patients being treated for a range of health conditions.5 The potential to replace the frequent and often painful needle pricks that diabetes patients endure with continuous and noninvasive monitoring of blood glucose levels is another example of the important health benefits that wearables may soon provide. Several companies are also working with university researchers to develop AI wearables that can detect COVID-19 days before traditional symptomology manifests, a model that is also being tested for earlier and more sensitive detection of high blood pressure, the flu, and some cancers.6 Other applications of wearables now or soon to be available include wearables to track and record athletes’ biometric and performance data; period trackers to help women keep track of their menstrual cycles; sound monitors to warn of excessive decibel levels; devices that measure employee stress in the workplace; wearable panic buttons for hotel housekeepers and workers who are frequently in vulnerable situations; driver drowsiness alerting systems using wearables such as glasses, a headband, or a headset; wearable sensors that measure exposure to toxic chemicals in the workplace or environment; ear buds with an electrical nerve stimulator that calms anxiety; and location tracking and contact tracing for exposure to COVID-19 in workplaces and senior living facilities.

These and many other “smart” health wearable technologies potentially offer immense benefits to consumers and patients. Rather than basing health care on a single annual fifteen-minute checkup in a physician’s office, smart wearables will be able to collect and analyze data 24/7, year-round, in every location, condition, and circumstance we experience. Such real-life data could function as a “check engine light,” providing early notice of potential health problems. While smart watches and fitness bands worn on the wrist are the primary form of wearables today, smart AI-empowered wearables are also being developed to be worn as rings, pendants, glasses, headbands, headsets, knee braces, ear buds, skin tattoos, various items of clothing, and implanted devices. Moreover, the expanding collection and use of wearable data are part of the shift to patient-centered medicine, in which the patient has primary control over his or her own health data.7

In addition to the benefits to individual patients or consumers from the collection of personalized wearable data, the massive quantities of data collected by wearable devices provide important opportunities for health research and public health protection. For example, Fitbit has collected 150 billion hours of heart-rate data from tens of millions of people, and over 6 billion nights of sleep data.8 This massive data collection can be interrogated using AI systems to look for previously undetected patterns between activities and health and wellness.

While the potential benefits of smart health wearables are both substantial and exciting, these technologies present some important legal challenges involving data privacy, data security, data accuracy, liability, bias, and unanticipated applications. A thorough analysis of each one of these legal issues could easily fill an entire article, but for the sake of completeness, they are all summarized briefly below.

Data Privacy

Machine learning AI, as its name implies, “learns” its capabilities from data, rather than rules programmed by humans. As such, wearable AI systems need large amounts of data, both to train the system initially and then to collect from the wearer in operation to make predictions and diagnoses about that specific individual. Those data, especially in the operational phase, will often be highly sensitive medical and other personal data about the individual user. Sometimes the privacy of those data will be protected by the Health Insurance Portability and Accountability Act (HIPAA), such as when the data are streamed to the user’s physician (a “covered entity” that triggers HIPAA protection). But in most cases, the wearable will collect health, wellness, and lifestyle data that are not transmitted to a physician, and thus are not protected by HIPAA. Some legislators have proposed to expand HIPAA to provide privacy protection to much of this wearable data, but unless and until such legislation is enacted, the privacy of wearable data will depend on the commitment and practices of the wearable manufacturer, potentially backed up by Federal Trade Commission (FTC) enforcement against unfair or deceptive business practices.

The sharing of data from health apps and wearables with third parties has become fairly routine, unbeknownst to most consumers.9 Various insurers, especially in the life insurance industry, now encourage or require the sharing of wearable data in exchange for lower rates or as a condition of coverage. Concerns about such data sharing with third parties have been exacerbated by revelations that health tech companies can unilaterally change their terms of service, perhaps to permit new uses or sharing of consumer data, without the consent of the consumer wearing the device.10 There have also already been privacy breaches of highly personal wearable data, so privacy will remain a central legal and policy concern with data-powered AI wearables. It is also the leading reason why many people decline to take advantage of smart wearable technologies at this time, stalling adoption of the technology.11

Data Security

AI-enabled wearables are likely to collect large amounts of sensitive data that could be stored in multiple locations, including by the wearable device itself, a smartphone linked to the wearable, the wearable manufacturer, and possibly the user’s physician. Data stored at each of those locations are potential targets of data hackers and cybercriminals. A smartphone app called MyFitnessPal that tracked exercise and diet was hacked in 2018, exposing the data of up to 150 million users, which were then offered for sale on the dark web. Health-related data are some of the most valuable information on the dark web, so if they are not adequately protected, they will likely be stolen.

The security of a wearable is likely to be the weakest link in a digital healthcare ecosystem because the wearable is unlikely to have the updated security protections that a smartphone, computer, or web-based storage site will have. If the wearable is classified as a medical device, it will be subject to the FDA’s guidance on cybersecurity for medical devices, finalized in December 2016, which requires the device manufacturer to follow best practices in implementing a risk management program for cybersecurity.12

Data Accuracy

For wearable health AI to be useful, it must produce reliable and accurate data and findings. Yet, studies have found that the evidence on the performance of interactive diagnostic apps “is sparse in scope, uneven in the information provided, and inconclusive with respect to safety and effectiveness.”13 For example, a 2017 study by Stanford researchers found that seven wearable fitness trackers generally provided accurate results in measuring heart rate but poorly estimated calories burned.14 Most current wearables do not require FDA approval, as the agency has chosen to exercise “enforcement discretion” over most wearables that measure wellness or lifestyle factors, even though these could be important health inputs. The FDA only requires wearables involved in the diagnosis or treatment of a specific, named disease to go through agency review, and that will usually consist of a clearance under section 510(k) of the Federal Food, Drug, and Cosmetic Act that does not involve an in-depth review of safety and efficacy. A major challenge for the regulation of wearable AI, like all AI health systems, is that the AI is continuing to learn and, hopefully, improve with the collection of additional data. The FDA is currently grappling with the need to transition from the previous paradigm of “locked down” product approvals to a new AI-friendly paradigm of adaptive regulatory approvals and monitoring.15

For the majority of wearables that do not require regulatory review, the accuracy of wearable data will depend on the manufacturer’s diligence and commitment. Because extensive empirical data sets may be needed to demonstrate accuracy in all the various scenarios in which a wearable will be used, not all manufacturers will be able or willing to provide the assurance of accuracy that customers are likely to expect from their wearable devices. Consumers and their physicians will have no way of knowing whether the data from a particular wearable are reliable or not. Moreover, if a wearable device produces a high rate of false positives, for example, neither the user nor his or her physician will be able to identify and trust a true positive result. These inaccurate results not only diminish the utility and benefits of wearables, but there are a number of ways that incorrect or incomplete information can harm consumers. False positive results can lead to unnecessary noninvasive or invasive testing and potential overtreatment, while false negative results can provide false reassurance that an at-risk person is healthy.16 There is already at least one anecdotal report of an asymptomatic patient being physically harmed by intrusive testing triggered by a false positive wearable result.17 Finally, wearables and other apps may generate so much data that the user and his or her physician are overwhelmed by a phenomenon called data saturation.18 One potential strategy to address this governance gap is to create a private self-certification system for wearables in which manufacturers answer a series of questions about their product that would be made available online for consumer and physician viewing and comment, as has recently been proposed for mental health apps.19
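The distrust that false positives breed can be made concrete with a simple base-rate calculation. The sketch below is a hypothetical illustration (the sensitivity, specificity, and prevalence figures are assumed for the example, not drawn from any study cited here): even a device that is right most of the time produces mostly false alarms when the condition it screens for is rare.

```python
# Hypothetical illustration: positive predictive value (PPV) of a wearable
# alert for a rare condition. All numbers are assumptions chosen to show
# the base-rate effect, not measurements of any real device.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive alert reflects a true condition."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A device that catches 90% of true cases and falsely alerts only 5% of the
# time still performs poorly when just 1 in 100 wearers has the condition:
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.95, prevalence=0.01)
print(f"PPV: {ppv:.1%}")  # prints "PPV: 15.4%" on these assumed inputs
```

On these assumed numbers, only about 15 percent of alerts are true positives, which is why a high false positive rate undermines a physician’s ability to trust any individual result.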

Liability

AI-enabled health wearables could raise novel liability risks. For example, the manufacturer of a wearable that falsely assures a user of the product or fails to detect a health risk in the user could be held liable if the failure results in the user becoming ill or being otherwise harmed. In the past, software programs were generally treated as a service or a good rather than as a product for liability purposes, which results in lower liability risk.20 However, if an AI software program in a wearable causes physical injury or harm, as we have already seen with autonomous vehicle and robotic surgery mishaps, it would likely be treated as a “product,” and product liability doctrine would apply. In such a case, the manufacturer could be held liable for a design defect if a jury concludes (depending on the jurisdiction) that there was a reasonable safer alternative design or that the product failed to meet consumer expectations. The manufacturer may also be held liable for failure to warn if the product’s instructions did not provide sufficient warning of potential risks. For example, if a device classified a skin mole as noncancerous and it turned out to be a melanoma, the patient might sue the device manufacturer for the faulty diagnosis.

Under product liability, a product manufacturer may also be subject to punitive damages and class actions. However, if the mobile device was approved by the FDA with a pre-market authorization (PMA), the most burdensome regulatory pathway, then state product liability laws may be preempted,21 an important consideration for the manufacturers’ counsel in considering the trade-off between pre-market regulatory costs and post-market liability risk.

A physician who receives a data stream from a patient’s wearable could also be held liable under medical malpractice if the data indicate an impending health problem that the physician fails to notice or take action to mitigate. The physician could be held liable if (again depending on the jurisdiction) he or she failed to provide reasonable care or failed to comply with community standards for similarly trained physicians. For both manufacturer and physician liability, a major challenge is the rapid pace of health AI advances, which will make it difficult for a manufacturer or physician to determine what is a “reasonable” uptake of the developing technology.22

There is little or no case law on the dimensions of the duty of physicians to monitor patients’ wearable data; for example:

It is unclear . . . if and when the monitoring of a wearable medical device creates a physician-patient relationship that brings a duty of care. Does it matter if the physician provided the device, recommended it, or was merely made aware of the data by the patient? Once a wearable device is known to the physician, when and how often must the data be reviewed? When does a physician’s knowledge of data from a wearable device obligate the inclusion of that information in treatment decisions—and when does the failure to obtain or use data from a wearable constitute a breach of the duty of care?23

Bias

One of the most prevalent and serious risks with AI generally is the potential for bias. Because machine learning AI is trained on large sets of data, if the underlying data are biased, the resulting algorithm will also produce biased results, usually without any obvious indication of the biased outputs. Because bias based on race, sex, and other factors is unfortunately still common in our society, many AI-based applications, including facial recognition systems, criminal sentencing algorithms, employment algorithms, and healthcare spending systems, have produced discriminatory results for historically discriminated-against groups such as African Americans and women.24 AI-based wearables are likely to suffer from similar risks of bias, and the lesson from other examples is that if you don’t affirmatively look for and seek to correct data bias, it is likely there but “hidden in plain sight.”25 Algorithmic bias assessment tools are now available to check for AI bias, and the use of such tools is quickly becoming best practice in all AI applications.26
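One common check such tools perform is a comparison of error rates across demographic groups. The sketch below is a minimal, made-up illustration of that idea (the groups and records are fabricated, and this is not any particular vendor’s tool): it computes the false negative rate per group, the kind of disparity a bias audit looks for.

```python
# Minimal sketch of one check an algorithmic bias audit performs:
# comparing false negative rates across demographic groups.
# The records below are fabricated for illustration only.
from collections import defaultdict

def false_negative_rates(records):
    """records: list of (group, actual, predicted), with 1 = condition present."""
    missed = defaultdict(int)   # true cases the model failed to flag, per group
    actual = defaultdict(int)   # all true cases, per group
    for group, truth, pred in records:
        if truth == 1:
            actual[group] += 1
            if pred == 0:
                missed[group] += 1
    return {g: missed[g] / actual[g] for g in actual}

data = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]
rates = false_negative_rates(data)
print(rates)  # group_a misses 25% of true cases; group_b misses 75%
```

A large gap between groups, like the one in this fabricated data, signals that the model’s errors fall disproportionately on one population and that the training data deserve scrutiny.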

Unanticipated Applications

Perhaps not surprisingly, having a smart device on your body leads to unanticipated applications. There have already been examples of people trying to game the system to make it look like they were much more active than they actually were, such as by putting the wearable in a clothes dryer overnight or attaching it to a pet dog’s leg. Data from the first generation of mobile devices have already been used in a variety of legal proceedings, ranging from criminal investigations for serious crimes such as murder, rape, and kidnapping to civil matters involving accidents and workers’ compensation. Prosecutors have subpoenaed and used wearable data from pacemakers to fitness trackers to contradict the alibis of criminal suspects, raising questions about the appropriate limits of the Fourth and Fifth Amendments in allowing government access to such data.27 In private lawsuits, ranging from workers’ compensation claims to accident injury cases, wearable data are increasingly being sought in discovery to help establish a party’s daily activities, location at a particular time, and speed and direction of travel.28 This trend in using wearable data in lawsuits is helping make litigation more evidence-based, and leads to one of the best law review article titles ever: “Wearable Devices as Admissible Evidence: Technology Is Killing Our Opportunity to Lie.”29

Conclusion

Wearables represent a new era of patient-focused healthcare and consumer self-quantification. This technology offers many potential applications and benefits. However, it also presents a number of legal concerns and issues. It will be the role and responsibility of attorneys to anticipate and seek to mitigate such problems, crafting legal solutions that promote rather than suppress the innovation and benefits of this emerging technology.


1. Pedro F. Saint-Maurice et al., Association of Daily Step Count and Step Intensity with Mortality Among US Adults, 323 JAMA 1151, 1158 (2020).

2. Aaron Holmes, A Texas Man Says His Apple Watch Saved His Life by Detecting Problems with His Heartbeat, Bus. Insider (Nov. 25, 2019).

3. Heather Landi, Fitbit Rolls Out Blood Oxygen Tracking with an Eye Toward FDA Clearance for Sleep Apnea Diagnosis Feature, Fierce Healthcare (Jan. 17, 2020).

4. Press Release, Fitbit, Fitbit and Google Announce Collaboration to Accelerate Innovation in Digital Health and Wearables (Apr. 30, 2018).

5. Alice McCarthy, The Biomarker Future Is Digital, Clinical OMICs, Jan./Feb. 2020, at 24–28.

6. Geoffrey A. Fowler, Wearable Tech Can Spot Coronavirus Symptoms Before You Even Realize You’re Sick, Wash. Post, May 28, 2020.

7. Eric Topol, The Patient Will See You Now: The Future of Medicine Is in Your Hands (2016).

8. David Pogue, Exclusive: Fitbit’s 150 Billion Hours of Heart Data Reveal Secrets About Health, Yahoo Fin. (Aug. 27, 2018).

9. Quinn Grundy et al., Data Sharing Practices of Medicines Related Apps and the Mobile Ecosystem: Traffic, Content, and Network Analysis, 364 Br. Med. J. 1920 (2019).

10. Jessica L. Roberts & Jim Hawkins, When Health Tech Companies Change Their Terms of Service, 367 Science 745 (2020).

11. Accenture, Digital Is Transforming Health, So Why Is Consumer Adoption Stalling? (2020).

12. FDA, Guidance: Postmarket Management of Cybersecurity in Medical Devices (Dec. 28, 2016).

13. Michael L. Millenson et al., Beyond Dr. Google: The Evidence on Consumer-Facing Digital Tools for Diagnosis, 5 Diagnosis 95, 103 (2018).

14. Anna Shcherbina et al., Accuracy in Wrist-Worn, Sensor-Based Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort, 7 J. Pers. Med. 3 (2017).

15. Boris Babic et al., Algorithms on Regulatory Lockdown in Medicine, 366 Science 1202 (2019).

16. Saba Akbar et al., Safety Concerns with Consumer-Facing Mobile Health Applications and Their Consequences: A Scoping Review, 27 J. Am. Med. Informatics Ass’n 330 (2020).

17. John Mandrola, A Contrarian View of Digital Health, Quillette (May 17, 2019).

18. John P. Erwin III & Debra Davidson, Wearables Offer Wealth of Data During COVID-19, but Liability Risks Remain, The Doctor’s Co. (Apr. 24, 2020).

19. Elena Rodriguez-Villa & John Torous, Regulating Digital Health Technologies with Transparency: The Case for Dynamic and Multi-Stakeholder Evaluation, 17 BMC Med. 226 (2019).

20. David L. Ferrera & Mara A. O’Malley, The New App Economy: Products Liability in an Increasingly Mobile World, BNA Elec. Com. & Law Rep., Feb. 14, 2017.

21. Riegel v. Medtronic, Inc., 552 U.S. 312 (2008).

22. Gary E. Marchant & Lucille M. Tournas, AI Health Care Liability: From Research Trials to Court Trials, 12 J. Health & Life Sci. L., no. 2, Feb. 2019, at 23.

23. Erwin & Davidson, supra note 18.

24. See, e.g., Ruha Benjamin, Assessing Risk, Automating Racism, 366 Science 421 (2019).

25. Darshali A. Vyas, Leo G. Eisenstein & David S. Jones, Hidden in Plain Sight—Reconsidering the Use of Race Correction in Clinical Algorithms, N. Engl. J. Med. (June 17, 2020).

26. Nicol Turner Lee, Paul Resnick & Genie Barton, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms, Brookings (May 22, 2019).

27. Marguerite Reardon, Your Alexa and Fitbit Can Testify Against You in Court, CNET.com (Apr. 5, 2018).

28. Doug K.W. Landau, “Smart” Evidence Tracking, Trial, Aug. 2018, at 56.

29. Nicole Chauriye, Wearable Devices as Admissible Evidence: Technology Is Killing Our Opportunity to Lie, 24 Cath. U. J.L. & Tech. 495 (2016).

The material in all ABA publications is copyrighted and may be reprinted by permission only.


Gary Marchant is Regents Professor and Faculty Director, Center for Law, Science & Innovation, Sandra Day O’Connor College of Law, Arizona State University.