The COVID-19 pandemic, which has generated a surge in telehealth and introduced the concept of contact tracing into our daily lives, is likely to expose businesses and governments to an increased risk of data privacy and data breach class actions related to health and other personal data. This article discusses potential economic approaches and challenges to valuing, in class action settings, alleged unconsented use or misappropriation of health and other private data generated during this health crisis.
Upward Trend in Litigation Expected
The spike in the use of telehealth has been one of the most dramatic changes in healthcare delivery since the beginning of the COVID-19 pandemic. Telehealth involves, among other things, the practice of doctors caring for patients remotely through tools such as teleconferencing and videoconferencing.
The ability to receive care without having to travel to healthcare facilities has increased the appeal of telehealth, including telemedicine visits, for many patients during the pandemic. According to an April 2020 study, there is a strong correlation between the U.S. population’s interest in telehealth and the number of COVID-19 cases. Similarly, an analysis published by the Commonwealth Fund shows that the share of physician visits conducted via telehealth was practically nonexistent in the first two months of 2020 and rose to nearly 14 percent by mid-April, as shown in this figure. While the share of telehealth visits has somewhat declined since that peak, it remains significantly higher than before the pandemic.
Given the sensitive nature of the data exchanged during telehealth visits and stored by telehealth providers, the use of such communication technologies raises concerns about the susceptibility of health and other personal data to disclosure, use, or misappropriation by unauthorized third parties. To facilitate the expansion of telehealth during the pandemic, in March 2020 the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS) lifted privacy and security compliance penalties and enforcement actions against providers who use audio or video communication technology to provide remote healthcare services. As a result, providers can use various communication technologies, such as Facebook Messenger video chat, Google Hangouts video, Zoom, and Skype, to provide telehealth services without the risk that OCR might seek to impose a penalty for noncompliance with the relevant privacy and security protocols.
Privacy and security concerns associated with telehealth also extend to the devices patients use to communicate and exchange data with their telehealth providers. For example, patients typically use smartphones, tablets, and computers as well as in-home patient monitors or other remote-care devices as part of the remote management of their healthcare. Personal information (including health data and payment information) entrusted to these connected devices can be improperly accessed, or the devices themselves may be used to infiltrate larger networks of patient data.
Contact tracing is another change facilitated by the pandemic that has led to an increase in exchange of personal data among individuals, companies, and governments. Contact tracing is a public health management tool and involves identifying and monitoring individuals who had contact with infected individuals and notifying them of their potential exposure. While there is no compulsory digital COVID-19 contact tracing program in the U.S., multiple voluntary mobile apps developed by private companies exist.
With contact tracing initiatives, the COVID-19 status and geolocation of individuals are collected, stored, and also sometimes shared with various entities, raising data privacy and data breach concerns. Geolocation data collected from smartphones with contact tracing apps may be used in isolation or in combination with other data to uncover a variety of information about an individual, including routine activities (e.g., medical intake), interests (e.g., gym membership), and affiliations (e.g., religious affiliation). These data could be obtained by unauthorized parties (e.g., hackers), who can publicly disclose confidential information that has been collected. Hackers can also create “fake” contact tracing apps or send fake messages pretending to be contact tracers to initiate a malware attack or a phishing scam to extract credit card and other confidential information.
Accordingly, such changes instituted during the pandemic with regard to healthcare delivery and public health management are expected to increase class action litigation related to data privacy and data breaches in the healthcare industry.
Potential Economic Approaches and Challenges to Valuing Alleged Unconsented Use or Misappropriation of Health and Other Personal Data Generated During the COVID-19 Pandemic
Broadly, there are two types of consumer class actions related to personal data: (1) data privacy class actions in which the data at issue were allegedly misused by the parties that received the data, and (2) data breach class actions where the data at issue were exposed and improperly accessed by unrelated third parties. An example of the former is a federal lawsuit filed in 2018 against CVS Health alleging exposure of the personal health information of over 6,000 individuals via clear-windowed mailings revealing their names, addresses, and HIV status. An example of the latter involves lawsuits against Anthem following a data breach that allegedly exposed personal data on 80 million individuals, including names, birth dates, medical identification numbers, and Social Security numbers.
In data privacy class actions, damages pursued are often based on alleged loss of “intrinsic” value of privacy and unjust enrichment of the party that has misused the data. In data breach class actions, on the other hand, damages pursued are often based on actual fraud costs, future risk of identity theft, and identity theft monitoring and prevention costs. The economic approaches related to these theories of harm are discussed next, as they are likely to arise in relation to potential telehealth and contact tracing data matters coming out of the COVID-19 pandemic.
“Intrinsic” Value of Privacy
The loss of “intrinsic” value of privacy theory is built on the premise that keeping information private has a uniform economic value that is common to all individuals (e.g., a societal value), and unauthorized access to this information by a third party would result in the loss of that value. Such a common, uniform value to privacy implies that the alleged injury is not specific to the circumstances of an individual or the infringing party, rendering an identical quantum of damages for each putative class member. For example, under this theory, unauthorized use of geolocation or health data exchanged as part of COVID-19 contact tracing initiatives would generate the same amount of damages for each putative class member regardless of the extent of information provided by a given individual. Similarly, unauthorized use of geolocation data accessed by means unrelated to alleged misconduct would generate identical damages (e.g., damages due to unauthorized use of geolocation data accessed via a gaming app would be the same as damages due to unauthorized use of geolocation data accessed via a contact tracing app). Further, public disclosure of the at-issue information in other contexts (e.g., an infected individual posting COVID-19 status in a public Facebook profile) is unlikely to matter under the loss of “intrinsic” value of privacy theory.
Findings in academic research are inconsistent with a uniform, common value of privacy. Research shows that privacy expectations and preferences vary across individuals due to factors specific to each individual (e.g., the type and amount of information at issue), individuals’ beliefs about the identities of the parties using the data, and the ways these parties make use of the data.
Survey-based, quantitative methods such as contingent valuation and conjoint analysis have been proposed as suitable methods to estimate invasion of privacy damages in data privacy class actions. Contingent valuation methods involve asking survey respondents directly about their value for data privacy (e.g., “How much would you pay to protect the privacy of your data?”), whereas conjoint studies involve estimating the value of privacy based on product or service choices that respondents make in a series of survey questions.
These methods have been subject to a number of critiques. First, these methods are “stated preference methods” in that they rely on what people say or imply they will do, and not on what they actually do. Second, in the privacy context, survey methods are subject to the so-called privacy paradox, the well-documented discrepancy between consumers’ stated preferences for privacy and their privacy-related behaviors (consumers assign a high value to data privacy when asked directly, but their actual behavior suggests much lower, or even zero, values for data privacy). Third, both methods have been shown to generate inflated values for privacy due to biases to which such surveys are susceptible (e.g., focalism bias in conjoint surveys, the criticism that these surveys artificially focus respondents’ attention on privacy).
Further, the results of such survey methods are typically used to generate an “average” value of privacy and are extrapolated to the putative class as a whole. This can raise challenges due to the considerable variation typically observed among survey respondents and the extent of heterogeneity in privacy expectations and preferences demonstrated in the academic literature. Thus, reliably extrapolating the privacy value estimates beyond the survey samples is difficult at best.
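The extrapolation problem can be illustrated with a purely hypothetical simulation (all numbers are invented for illustration): when stated privacy valuations are highly skewed across respondents, the sample average is driven by a small group of high-valuation individuals and poorly summarizes what a typical class member would value.

```python
import random

random.seed(42)

# Hypothetical stated willingness-to-pay (WTP) for data privacy, in dollars.
# Most respondents report low values; a small minority reports very high
# ones, mimicking the skewed distributions documented in survey research.
low_valuers = [random.uniform(0, 5) for _ in range(900)]
high_valuers = [random.uniform(50, 200) for _ in range(100)]
stated_wtp = low_valuers + high_valuers

mean_wtp = sum(stated_wtp) / len(stated_wtp)
median_wtp = sorted(stated_wtp)[len(stated_wtp) // 2]

print(f"Sample mean WTP:   ${mean_wtp:.2f}")
print(f"Sample median WTP: ${median_wtp:.2f}")
# With this skew, the mean is several times the median, so a class-wide
# damages figure built on the average overstates harm for most members.
```

Under these assumed distributions, the average is driven almost entirely by the 10 percent of high-valuation respondents, which is one reason extrapolating a single survey average to an entire putative class is contested.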
An alternative theory of harm put forward in data privacy class actions is based on the allegation that the infringing third party generated revenues and profits by using private data without authorization. Generating reliable estimates of privacy value based on this theory requires the use of a method that can distinguish and isolate the portion of the infringer’s valuation, revenues, or profits that is directly attributable to the alleged misuse of private data. This can be a challenging exercise, as numerous factors (many unrelated to data privacy) may influence a firm’s valuation, revenues, and profits. For example, determining the value to companies involved in an alleged unauthorized use of geolocation and other private data shared with contact tracing apps would require controlling for all non-privacy factors that influence these companies’ valuation, revenues, and profits.
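A stylized apportionment sketch (all figures and driver shares are invented; in practice the shares would have to come from an econometric model) illustrates why isolating the revenue attributable to the at-issue data matters:

```python
# Hypothetical illustration: only a fraction of a firm's revenue may be
# attributable to the allegedly misused data, and that fraction must be
# isolated from revenue drivers unrelated to the alleged conduct.
total_revenue = 10_000_000.0  # assumed annual revenue, in dollars

# Assumed contributions of each revenue driver, e.g., estimated from a
# (hypothetical) attribution model; these shares are purely illustrative.
driver_shares = {
    "product features unrelated to data": 0.70,
    "licensed (authorized) data": 0.22,
    "allegedly misused geolocation data": 0.08,
}

at_issue_revenue = total_revenue * driver_shares[
    "allegedly misused geolocation data"
]
print(f"Revenue attributable to at-issue data: ${at_issue_revenue:,.2f}")
# Only $800,000 of the $10M would be at issue under these assumed shares;
# the estimate is only as reliable as the attribution model behind it.
```

The calculation is trivial once the shares are known; the economic difficulty lies entirely in estimating those shares while controlling for the non-privacy factors noted above.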
Actual Fraud Costs
In data breach class actions, one of the most commonly pursued types of damages involves actual fraud costs. In the case of a breach of payment card data, for instance, this often involves identifying fraudulent transactions and the associated amounts on exposed accounts. However, because consumers typically share the same types of data with multiple parties and because concurrent data breach incidents have become increasingly common, it can be difficult to establish a nexus between fraudulent activity and a particular data breach incident. According to a 2019 industry study, for example, there were 1,473 data breaches in the U.S. in 2019 alone, and over 164 million personally identifiable records were exposed in those breaches. Similarly, a 2016 study showed that roughly 36 million U.S. adults received more than one data breach notification between June 2014 and June 2015 alone. The actual number of data breaches is likely to be higher: A recent study estimated the number of unreported data breaches may equal 25 percent to 85 percent of the number of reported breaches. Overall, these factors need to be considered when establishing causality (i.e., that the loss was directly linked to a specific data breach incident).
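A toy numerical example (all figures invented) makes the attribution problem concrete: when a cardholder's data were exposed in several concurrent breaches, no single breach can be assigned a fraudulent charge without breach-specific evidence.

```python
# Hypothetical illustration: a cardholder's payment data were exposed in
# three concurrent breaches, and a fraudulent charge follows. Absent
# transaction-level forensics, a naive equal-weighting assigns each breach
# a 1/N share of the loss -- a strong assumption, not a causal finding.
concurrent_breaches = ["Breach A", "Breach B", "Breach C"]
fraud_amount = 750.00  # hypothetical fraudulent charge, in dollars

naive_attribution = {
    breach: fraud_amount / len(concurrent_breaches)
    for breach in concurrent_breaches
}
for breach, share in naive_attribution.items():
    print(f"{breach}: ${share:.2f}")
# Equal weighting holds each breach "responsible" for $250; establishing
# actual causation requires evidence tying the charge to one incident.
```

The point of the sketch is that the arithmetic is easy but the equal-weighting assumption embedded in it is exactly what a causation analysis must replace with evidence.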
Risk of Future Identity Theft
Harm arising from the risk of future identity theft is also commonly pursued in data breach class actions. This type of harm depends on the type of data breached and the future identity theft outcomes that may result from the breach. For example, if the breached data are limited to payment card data, an outcome such as fraudulent tax returns is unlikely. On the other hand, the harm from identity theft after a breach of Social Security and healthcare data could potentially involve opening new bank accounts, filing fraudulent tax returns, and committing medical and insurance fraud, among others.
The theory of harm based on the risk of future identity theft is based on the premise that the identity theft or other negative consequences of a data breach may not occur immediately. For that reason, it is argued that individuals whose information was breached should be compensated for the expected “long term” impact of the data breach. Historical evidence and academic literature, however, suggest that only a small number of individuals will experience any type of identity theft as a result of a data breach incident. Moreover, it is difficult to predict who will be affected: The probability that an individual will be subject to future identity theft can vary across individuals based on prior incidence of identity theft, number of companies that have access to the data at issue, and the type of data that was compromised. Further, any methods proposed to calculate this type of damages would need to be able to isolate the incremental risk associated with the data breach for each individual in the future.
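The individual variation described above can be sketched as a simple expected-loss calculation (all probabilities and loss amounts are invented for illustration): the relevant quantity is the incremental probability of identity theft attributable to the breach, multiplied by the expected loss if theft occurs, and both inputs differ across class members.

```python
# Hypothetical expected-loss calculation showing why future identity-theft
# damages vary across individuals rather than being uniform. All parameters
# (probabilities, loss amounts, member profiles) are invented.
class_members = [
    # (profile, baseline annual theft probability, probability after the
    #  breach, expected loss if identity theft occurs)
    ("no prior incidents, card data only", 0.010, 0.015, 500.0),
    ("prior incidents, card data only", 0.040, 0.050, 500.0),
    ("no prior incidents, SSN + health data", 0.010, 0.030, 5000.0),
]

expected_losses = {}
for profile, p_base, p_post, loss_if_theft in class_members:
    # Only the *incremental* risk created by the breach is attributable to it.
    incremental_risk = p_post - p_base
    expected_losses[profile] = incremental_risk * loss_if_theft
    print(f"{profile}: expected incremental loss "
          f"${expected_losses[profile]:.2f}")
```

Under these assumed inputs the expected incremental loss ranges from a few dollars to around $100 per person, underscoring why a single class-wide figure for future identity-theft risk is difficult to justify.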
Identity Theft Monitoring and Prevention Costs
Yet another common type of damages asserted in data breach class actions is based on what consumers allegedly already paid or would likely pay for credit and identity theft monitoring and prevention services. These may include a range of services such as “credit freezes” with credit reporting agencies, identity theft insurance, and credit monitoring services.
Given that not everyone would sign up for these types of services, determining which members of a proposed class incurred or would likely incur such costs is central to quantifying these damages. Survey methods soliciting self-reported measures from a sample of putative class members on the costs already incurred and the probability of signing up for credit monitoring or identity theft insurance services may be used. The validity of these methods will in part depend on the reliability of the self-reported measures and on the representativeness of the survey respondents.
In addition, real-world data may provide insight about the rate at which affected individuals are likely to sign up for credit and identity theft monitoring and prevention services. For example, many companies in the U.S. offer free credit monitoring services to individuals whose data were potentially exposed in a data breach incident. The share of individuals who sign up for these free services, which is typically low, can be informative of the share of individuals who would ultimately sign up and pay a fee for such services.
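A back-of-the-envelope sketch (all numbers invented) shows how an observed take-up rate bounds aggregate monitoring-cost damages relative to the assumption that every class member would pay for such services:

```python
# Hypothetical calculation: apply an assumed sign-up rate for free
# post-breach credit monitoring to a putative class to bound aggregate
# monitoring-cost damages. All figures are invented for illustration.
class_size = 1_000_000   # assumed putative class members
signup_rate = 0.05       # assumed share who enroll even when free
annual_fee = 120.00      # assumed annual cost of a comparable paid service

likely_enrollees = int(class_size * signup_rate)
aggregate_cost = likely_enrollees * annual_fee
print(f"Likely enrollees: {likely_enrollees:,}")
print(f"Aggregate monitoring cost: ${aggregate_cost:,.2f}")
# 50,000 enrollees x $120 = $6,000,000 -- far below the $120,000,000 that
# would result from assuming every class member signs up.
```

The gap between the two figures, a factor of twenty under these assumptions, is why the empirically low take-up rates for free services matter so much to this damages theory.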
Vildan Altuglu is a vice president at Cornerstone Research in New York City, New York. Maria Salgado is a vice president at Cornerstone Research in San Francisco, California. Omur Celmanbet is a principal at Cornerstone Research in Washington, D.C. Rezwan Haque is a manager at Cornerstone Research in San Francisco, California. Lucia Yanguas is an associate at Cornerstone Research in Los Angeles, California. The views expressed in this article are solely those of the authors, who are responsible for the content, and do not necessarily represent the views of Cornerstone Research.
Copyright © 2020, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).