Generally, all laws governing data privacy can bear upon AI use. Users must comply with broad regulations, such as the Defend Trade Secrets Act of 2016 (DTSA), which provides a federal civil remedy for the misappropriation of trade secrets, as well as data-specific laws (e.g., Health Insurance Portability and Accountability Act (HIPAA), Health Information Technology for Economic and Clinical Health Act (HITECH), Children’s Online Privacy Protection Act (COPPA), Gramm-Leach-Bliley Act (GLBA), Fair Credit Reporting Act (FCRA), and, where applicable, EU General Data Protection Regulation (GDPR) 2016/679 and the California Consumer Privacy Act (CCPA), Cal. Civil Code § 1798.100 et seq.), where AI uses personal data. Recent AI-specific regulations and case law emphasize the necessity of consumer data protection and have changed the dialogue around the mineable value stored in a person’s sensitive data.
While there is currently no comprehensive federal privacy law in the United States, a number of other federal laws and regulatory developments bear upon the use of data in AI and automated analysis systems. These include:
The Fair Credit Reporting Act (FCRA), 15 U.S.C. § 1681 et seq., promotes the fairness, accuracy, and privacy of consumer information maintained in the files of consumer reporting agencies. https://www.govinfo.gov/content/pkg/USCODE-2011-title15/pdf/USCODE-2011-title15-chap41-subchapIII.pdf
The FTC’s guidance Using Artificial Intelligence and Algorithms outlines best business practices for managing consumer protection risks associated with AI, machine learning, and automated decision-making, including transparency, fairness, and clarity in AI algorithms. https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms
The pending Algorithmic Accountability Act of 2022 proposes new transparency and oversight requirements for software, algorithms, and automated systems by requiring covered entities to perform impact assessments of processes that have legal or material effects on consumers. https://www.congress.gov/bill/117th-congress/house-bill/6580/text
The Department of Defense (DOD) promulgated the DOD Joint All-Domain Command and Control (JADC2) Implementation Plan to enable the Joint Force to use AI and predictive analytics in battle. https://www.defense.gov/News/Releases/Release/Article/2970094/dod-announces-release-of-jadc2-implementation-plan
The Department of Energy (DOE) established the inaugural Artificial Intelligence Advancement Council (AIAC) to coordinate AI activities in the DOE. https://www.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council
The Office of the Director of National Intelligence (ODNI) Intelligence Advanced Research Projects Activity announced a Biometric Recognition & Identification at Altitude and Range program to research whole-body biometric identification from long distances. https://www.odni.gov/index.php/newsroom/press-releases/press-releases-2022/item/2282-iarpa-launches-new-biometric-technology-research-program?tmpl=component&print=1.
The Internal Revenue Service (IRS) abandoned its facial recognition technology for authenticating taxpayers’ online accounts after facing bipartisan backlash. https://www.irs.gov/newsroom/irs-announces-transition-away-from-use-of-third-party-verification-involving-facial-recognition
The National Institute of Standards and Technology (NIST) released drafts of its “AI Risk Management Framework” for public comment and updated its special publication “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” encouraging standardization of AI algorithms to minimize unintentional bias. https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf
The White House Office of Science and Technology Policy (OSTP) recently issued its Blueprint for an AI Bill of Rights, which seeks to help guide the design, development, and deployment of AI and automated systems so that they protect the rights of the American public. The AI Bill of Rights is designed to apply broadly to all automated systems that have the “potential” to significantly impact individuals or communities concerning matters that include privacy, civil rights, equal opportunities for healthcare, education, employment, and access to resources and services.
AI continues to offer a myriad of benefits when used in commercial operations—including increased efficiency, reduced costs, enhanced customer experiences, and smarter decision-making, among others. At the same time, however, growing reliance on these tools has also garnered increased interest from lawmakers and regulators concerned about potential fairness and bias issues associated with the use of this technology. In June 2022, the Federal Trade Commission (FTC) issued its Combatting Online Harms Through Innovation: A Report to Congress, in which the agency signaled its positions on AI and intent to enhance its enforcement efforts in connection with improper uses of algorithmic decision-making tools. More recently, on August 11, 2022, the FTC reemphasized the priority focus it has placed on policing AI with the issuance of its Advance Notice of Proposed Rulemaking on commercial surveillance and lax data security practices (“Commercial Surveillance ANPR”), a large portion of which focuses on issues relating to AI and whether the FTC should promulgate new rules to regulate or otherwise limit the use of these advanced technologies. At the same time, the Consumer Financial Protection Bureau (CFPB) also recently released its Circular 2022-03: Adverse Action Notification Requirements in Connection With Credit Decisions Based on Complex Algorithms, which cautions creditors about the need for compliance with the Equal Credit Opportunity Act (ECOA) when making credit decisions with the aid of complex algorithms.
The US Equal Employment Opportunity Commission (EEOC) has also signaled its intent to closely scrutinize the use of AI tools in hiring and employment decisions to ensure that employers and vendors use these technologies fairly and consistently with federal equal employment opportunity laws. In May 2022, the EEOC issued The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees—extensive guidance designed to assist employers in avoiding violations of the Americans with Disabilities Act (ADA) when using AI to assess job candidates and employees. The EEOC guidance provides a detailed discussion of the primary ways in which the use of AI tools can result in disability discrimination while also offering several “promising practices” that employers can implement to comply with the ADA when leveraging the benefits of AI technologies. Of note, within just a few days of issuing its guidance, the EEOC filed a federal age discrimination suit against a software developer alleging that its application software engaged in intentional discrimination in violation of the Age Discrimination in Employment Act (ADEA) through programming that solicited birthdates and automatically rejected applicants based on their age.
Further, the past decade has seen significant case law and policy-advocacy efforts to refine regulation of AI data use and personal data access, including a significant amount of activity at the state level. Next year, a total of five new state consumer privacy laws will go into effect. The California Privacy Rights Act (CPRA) and the Virginia Consumer Data Protection Act (VCDPA) go into effect on January 1, 2023; the Colorado Privacy Act (CPA) and the Connecticut Data Privacy Act (CTDPA) take effect on July 1, 2023; and the Utah Consumer Privacy Act (UCPA) goes into effect on December 31, 2023. These laws grant consumers certain rights, including the right to opt out of the processing of their personal information, and impose requirements that bear upon automated decision-making and AI.
In 2022, over 100 privacy bills were introduced at the state level, many of which, if enacted, would regulate AI. Moreover, states including Colorado, Illinois, and Vermont created working groups to study the legal implications of developing AI capabilities. Additional notable developments are highlighted below.
California:
Enacted CA A.B. 2273. The California Age-Appropriate Design Code Act authorizes the Attorney General to seek an injunction or civil penalty against any business that negligently or intentionally violates the Act in its use of data to profile and market to children.
Pending CA A.B. 2408. The Social Media Platform Duty to Children Act prohibits social media platforms from “using a design, feature, or affordance that the platform knew, or by the exercise of reasonable care should have known, causes a child user, as defined, to become addicted to the platform,” and provides civil relief to valid claims.
Colorado:
Enacted CO S.B. 113. The Act establishes a task force to study facial recognition services (FRS), provide guidance, and conduct additional research on consumer data privacy needs implicated by such AI functions.
District of Columbia:
Pending DC B 558. The bill prohibits users of algorithmic decision-making from taking discriminatory action and requires notice of personal data use.
Illinois:
Enacted IL H.B. 53. The Act refines the Illinois Artificial Intelligence Video Interview Act of 2020 by requiring that employers relying solely upon AI analysis of interview videos to decide whether to offer an applicant a live in-person interview submit demographic information to the Department of Commerce and Economic Opportunity for analysis. The Department must then report to the Governor and General Assembly whether the provided demographic data demonstrate a racial bias in the use of AI.
Pending IL H.B. 69. The bill amends the University of Illinois Hospital Act and the Hospital Licensing Act to provide that, before using any diagnostic algorithm, a hospital must first confirm that the algorithm is certified by the Department of Public Health and the Department of Innovation and Technology, achieves diagnostic results at least as accurate as those of other means, and is not the only diagnostic method available to the patient.
Enacted IL H.B. 645. The Illinois Future of Work Act establishes a working group to identify and assess the new technologies that may significantly affect employment and provide subsequent guidance.
Pending IL H.B. 1811. The bill amends the Equal Pay Act and the Consumer Fraud and Deceptive Business Practices Act to provide that when an organization uses predictive data analytics in determining creditworthiness and hiring decisions, data correlated with race and zip code must be omitted. The bill further amends the Human Rights Act to permit the use of racial and residential data for specific purposes.
Massachusetts:
Pending MA H.B. 119. The bill establishes a commission to oversee state agency automated decision-making, artificial intelligence, transparency, fairness, and individual rights.
Pending MA H.B. 136. The bill amends the General Laws to provide that data aggregators utilizing automated decision systems “shall perform: (i) continuous and automated testing for bias on the basis of a protected class; and (ii) continuous and automated testing for disparate impact on the basis of a protected class as required by the agency.”
Pending MA H.B. 142. The bill establishes the Massachusetts Information Privacy Act, which, in part, requires covered entities to disclose the use of automated decision systems.
Pending MA H.B. 4029. The bill establishes a two-year period for the creation of regulations on algorithmic accountability and bias prevention to protect consumers when a covered entity processes personal information in its operations.
Pending MA H.B. 4152. The bill requires data controllers to disclose any use of automated decision-making systems, including profiling, and to provide meaningful information about system algorithms and the significance of the data processing.
Michigan:
Pending MI H.B. 4439. The bill amends the Michigan Employment Security Act to require reviews of the unemployment agency’s computer system algorithms and logic formulas.
New Jersey:
Pending NJ A.B. 168. The bill requires the Commissioner of Labor and Workforce Development to conduct a study and issue a report on the impacts of artificial intelligence on the state’s economic development.
New York:
Pending NY A.B. 2414. The bill amends the Labor Law to establish a commission on the future of work to research and report on the impact of technology on workers, employers, and the economy of the state.
Pending NY A.B. 3082. The bill amends the Insurance Law to prohibit motor vehicle insurers from discriminating on the basis of socioeconomic factors in the algorithms used to construct actuarial data.
Pending NY A.B. 6042. Known as the Digital Fairness Act, the bill provides that specified governmental agencies and corporations shall not gather or use information from an automated decision system before conducting, and publishing online, a third-party automated decision system impact assessment.
Pennsylvania:
Pending PA H.B. 1338. The Automated Decision Systems Task Force Act establishes an Automated Decision Systems Task Force to research the prevalence of the use of automated decision systems in the state and recommend next steps.
Rhode Island:
Pending RI H.B. 7223. The bill establishes a permanent commission to monitor the use of artificial intelligence in state government and offer recommendations.
Pending RI H.B. 7230. The bill prohibits insurers from using external consumer data, or decision-making algorithms that use external consumer data, to discriminate on a protected basis.
Vermont:
Enacted VT H.B. 410. The Act establishes the Vermont Artificial Intelligence Task Force to research and present a series of recommendations on policies and actions for the use of AI in the state.
Washington:
Enacted WA S.B. 5092. The Act appropriates funding for a working group on state use of automated decision making systems.
AI can provide significant benefits to human activities and can augment the decision-making process. However, failure to understand the technology and its impacts can also perpetuate human bias. Looking forward, as AI develops alongside other innovative technologies, novel fields continue to emerge—such as robot law—pushing lawyers to remain diligent and proactive in their legal research so as to stay ahead and ensure fairness in the legal system.
EU-U.S. Framework
On October 7, 2022, “President Biden signed an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (E.O.) directing the steps that the United States will take to implement the U.S. commitments under the European Union-U.S. Data Privacy Framework (EU-U.S. DPF).” Since then, the EU-U.S. DPF has been undergoing review by the European Commission, which is preparing a draft adequacy decision with a projected completion of “spring 2023.” Thereafter, the adequacy decision must survive the European Union’s (“EU”) adoption procedure before a “final adequacy decision” may be formally adopted. Even then, it is expected that it could take “two to three years” before it is clear whether the EU-U.S. DPF is here to stay, as it will “inevitably be subject to EU legal challenges once implemented.” However, to understand the purpose and likely effects of the EU-U.S. DPF, it’s necessary to be aware of how it came about.
In 1995, the European Union adopted the Data Protection Directive, officially known as “Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data,” which is “binding within the member states of the EU and regulates how personal data is collected and processed in the European Union.” The Data Protection Directive (Directive) was based on seven key principles, which until then had been non-binding guidelines in some EU member states. As a result of the Directive, “companies in the EU could not send personal information to countries outside the EU unless they satisfied one of the available transfer mechanisms, one of which is an adequacy decision that the receiving country has sufficient privacy protections in place.” While the Directive can arguably “be credited with creating one of the world’s leading paradigms for privacy protection,” there were criticisms that “often focused on the formalities imposed by the Directive (or by the transpositions thereof), and the economic costs of compliance and unequal enforcement.”
In 2000, the U.S. and the EU reached the “Safe Harbor accord,” which allowed applicable “US companies that complied with” the Directive’s seven principles “and certified accordingly with the U.S. Government” to “transfer data from the EU to the US (the ‘Safe Harbor’) without relying on other available mechanisms, like Standard Contractual Clauses (SCCs).” The Safe Harbor Agreement between the U.S. and the EU persisted for over a decade, until the 2013 leaks regarding the National Security Agency’s (NSA) mass surveillance efforts led to the filing of Maximillian Schrems v. Data Protection Commissioner (Schrems I) and its subsequent decision.
In October 2015, “the Court of Justice of the European Union (CJEU) delivered a judgment in Schrems I that invalidated the Safe Harbor Agreement,” finding that, among other things, “the ability of US government agencies to access electronic communications within the US violated EU privacy rights.” Fortunately for the “approximately 4,500 companies and organizations [that] were participating in Safe Harbor,” “EU data protection authorities . . . announced a four-month grace period during which they agreed to not enforce the Schrems I decision while U.S. and EU officials continued negotiations on a new agreement.”
In February of 2016, the EU-U.S. Privacy Shield (Privacy Shield) was announced, and it “became operational on August 1, 2016.” The Privacy Shield was “longer and more detailed than the previous Safe Harbor accord,” with “16 mandatory supplemental principles that include[d] provisions on sensitive data, secondary liability, the role of DPAs, human resources data, pharmaceutical and medical products, and publicly available data” joining the previous seven principles. Additionally, the Privacy Shield “clarifie[d] an organization’s responsibilities for compliance, and provide[d] a model for binding arbitration to address ‘residual’ complaints.” If a “U.S.-based organization” wished to join the Privacy Shield program, it needed to “self-certif[y] annually to the Department of Commerce, publicly committing to comply with the Framework’s principles and requirements that are enforceable under U.S. law.” Furthermore, “[w]hile decisions by organizations to participate in the Privacy Shield program are voluntary, once an organization opts in, effective compliance is compulsory.”
In 2018, the Directive was replaced by the General Data Protection Regulation (GDPR), “which increased privacy protections for EU residents.” The GDPR is arguably the “toughest privacy and security law in the world,” and is similar to the Directive in that it is guided by seven principles. In a judgment issued in July 2020 “known as Schrems II,” the CJEU decided “in relation to the requirements of the GDPR” that the Privacy Shield was “not a valid mechanism for transferring personal data from the EU to the United States” because it failed to ensure “an adequate level of protection for data transferred…given the breadth of U.S. data collection powers authorized in U.S. electronic surveillance laws and lack of redress options for EU citizens.”
Unlike with Schrems I, there was no grace period following Schrems II. However, both the EU and the U.S. have since provided guidance and recommendations to help organizations meet EU data protection obligations. It is believed that those obligations will largely remain the same under the EU-U.S. DPF, should it be adopted, given that the Schrems II decision “did not pertain to business requirements under the Privacy Shield” and “since the key changes agreed-upon under the EU-US DPF are directed at U.S. Government activities.” Thus, U.S. organizations already complying with the GDPR and the guidance issued by the EU should face few, if any, additional obligations.
Biometric Privacy Litigation/Enforcement Update
First BIPA Class Action Trial
The past year was significant for cases litigated under the Illinois Biometric Information Privacy Act (BIPA), arguably the most stringent biometric information privacy statute in the United States. On October 12, 2022, a jury in the federal court for the Northern District of Illinois returned the first jury verdict in a case brought under BIPA, Rogers v. BNSF Railway Co. The plaintiff representative and the class of over 45,000 truck drivers in Rogers alleged that BNSF had violated Section 15(b) of BIPA by improperly requiring drivers entering the railway’s facilities to provide their biometric information through a fingerprint scanner, without providing the requisite notice and consent. The plaintiffs also alleged BNSF had improperly disclosed biometric information to a third-party vendor providing the fingerprint-scanning services without informed consent in violation of Section 15(c) of BIPA, and was vicariously liable for the acts and omissions of that vendor. The plaintiff class sought statutory damages of $5,000 for each willful and/or reckless violation of BIPA or $1,000 for each negligent violation.
During the five-day trial, BNSF advanced a number of defenses to liability that were ultimately rejected by the jury. The railway’s principal contention was that it was not liable for violations allegedly committed by its third-party vendor, Remprex LLC, as BIPA had not incorporated common-law agency liability principles. BNSF claimed that the vendor was the entity collecting the employees’ biometric data (and not the railway itself), and that BNSF was unable to even access the biometric information. However, after only an hour of deliberation, the jury rejected that argument and found BNSF liable for approximately 45,600 reckless or intentional violations of BIPA and awarded the class $228 million in statutory damages.
Following the landmark verdict, BNSF moved for a new trial, arguing that the “unprecedented judgment awarding Plaintiff and the class a nine-figure windfall despite their admission that they suffered no actual harm was not supported by the evidence at trial.” Alternatively, BNSF argued that any alleged violations of BIPA should have been considered negligent violations, not the “reckless and/or willful” standard imposing increased damages. While the motion for a new trial remains pending, should that motion be denied, BNSF has indicated it will appeal. The parties are also engaging in settlement discussions.
Several Issues Bearing upon Scope of BIPA in Civil Litigation Remain Undecided by the Illinois Supreme Court
Another important case for BIPA liability, Tims v. Black Horse Carriers, Inc., is currently pending at the Illinois Supreme Court. In September 2021, an Illinois appellate court determined that different provisions of BIPA are subject to different limitations periods. BIPA itself does not specify a limitations period, and Black Horse Carriers argued to the appellate court that the one-year limitations period from 735 ILCS 5/13-201 applied to BIPA generally as a statute “specifically applicable” for privacy actions, overcoming the default “catch all” five-year limitations period of 735 ILCS 5/13-205. The appellate court split the baby, reasoning that Section 201’s one-year limitations period applied to violations of BIPA Section 15(c) and 15(d) as those sections involve “publication” of biometric data, while the five-year limitations period of Section 205 applies to Sections 15(a), (b), and (e) as those sections do not involve publication of an individual’s biometric data. Black Horse Carriers appealed that ruling, and the Illinois Supreme Court heard oral arguments on September 22, 2022. The Illinois Supreme Court’s resolution of the statute of limitations issue will provide more certainty to Illinois employers and businesses in the future.
Retailers’ Use of AI Continues to Be Targeted by BIPA Class Actions
One of the largest recent trends in BIPA litigation that continued over the course of 2022 was the targeting of online retailers in class action lawsuits alleging violations of BIPA. Among other factors, this trend can be attributed to retailers’ extensive use of technology that allegedly appears (according to plaintiffs’ counsel) to implicate facial recognition, as well as the availability of liquidated damages on a per-violation basis under BIPA. For example, in two currently pending class actions, a proprietary technology platform company was sued for alleged BIPA violations in connection with its “Smart Coolers” technology, which displays targeted advertisements on digital screens in retail store refrigerator cases based on a customer’s age, gender, and emotional disposition. In those cases, the plaintiffs allege that the company’s technology monitors shoppers using customer detection analysis to interpret collected data using a “facial profiling system” and, in turn, ascertain an individual’s “age, gender, and emotional response.”
In addition, retailers have also faced a high volume of BIPA lawsuits in connection with their use of virtual try-on (VTO) tools, which utilize facial feature detection capabilities to allow users to virtually “try on” products, such as eyewear or cosmetics, by placing the product on the user’s face so the user can see how it might look prior to making a purchase. Importantly, despite the questionable merits of the claims underlying these lawsuits, i.e., whether the VTO tools in question engage in scans of face geometry, the majority of defendants in these class actions have been unable to obtain dismissals at the motion to dismiss stage. Retailers are also being targeted for BIPA class lawsuits in a broad range of other contexts, such as the use of AI voice assistants that facilitate customers’ drive-thru orders, as well as restaurants’ use of automated voice order systems that enable customers to place orders over the phone.
Notable Settlements
In addition to regulatory developments, there were also notable BIPA settlements, as well as FTC settlements and enforcement actions bearing upon biometric and AI-related issues.
OkCupid. The FTC filed a petition on May 26, 2022, to investigate Match Group, owner of the OkCupid dating site, in connection with allegations that Clarifai, Inc., an AI firm, violated BIPA by harvesting facial data from OkCupid. The claim stems from a 2019 New York Times article asserting that Clarifai built its database of faces for biometric algorithm training using OkCupid user photos shared in 2014 by an OkCupid founder who was also a Clarifai investor. https://fingfx.thomsonreuters.com/gfx/legaldocs/lbpgnxkmxvq/frankel-ftcvmatch--petition.pdf
WeightWatchers. The FTC settled with WW International, Inc., and its subsidiary Kurbo, Inc. on March 4, 2022, after finding that the companies marketed a weight-loss app to children as young as eight years of age and subsequently processed underage users’ personal data without parental consent. The settlement orders both companies to delete all personal data collected from children under thirteen without parental consent, destroy any algorithms derived from the illegally obtained data, and pay a $1.5 million penalty. https://www.ftc.gov/system/files/ftc_gov/pdf/wwkurbostipulatedorder.pdf
Snap. In August, Snap, the parent company of photo-sharing platform Snapchat, reached a $35 million settlement to resolve ongoing litigation which alleged that the company improperly collected biometric data in violation of BIPA through its Lenses feature (which allows users to add special effects to their Snapchat images) and its Filters feature (which allows users to overlay images onto a pre-existing image framework). The case is Boone v. Snap Inc., No. 2022 LA 708 (Ill. Cir. Ct. DuPage Cnty.).
Litigation
California Attorney General Settles with Sephora Under CCPA
On August 24, 2022, California Attorney General Rob Bonta announced the Office of Attorney General (OAG) reached a settlement with Sephora, a beauty and cosmetic company, over the company’s alleged violation of the California Consumer Privacy Act (CCPA). The settlement is the first public CCPA fine and marks a significant change in California privacy enforcement as the “kid gloves are coming off,” according to AG Bonta.
The enforcement action against Sephora was a result of the OAG’s “enforcement sweep of large retailers to determine whether they continued to sell personal information when a consumer signaled an opt-out via the [Global Privacy Control].” The OAG used commercially available browser extensions to monitor network traffic before and after initiating the Global Privacy Control (GPC) while on Sephora’s website and observed no changes in network traffic. The OAG informed Sephora of its CCPA violations on June 25, 2021, but Sephora failed to cure its violations within the 30-day period allowed under CCPA.
In the complaint filed against Sephora on August 23, 2022, the OAG alleges Sephora sold consumer data to third parties despite telling California customers in its privacy policy that the company does not sell personal information. The OAG explains that under the CCPA a company can be deemed to have sold consumers’ personal information if it makes consumer personal information available to a third party and receives a benefit from the arrangement. Here, Sephora used third-party tracking software on its website that allowed third parties to create profiles about Sephora’s customers without their knowledge or consent. For instance, the tracking software could determine whether customers were accessing the website from a MacBook or a Dell, customers’ precise geolocation, and the types of products customers placed in their digital shopping carts. The data collected on consumer behavior were used to generate profiles, which Sephora then used to purchase targeted advertisements and which were “used for the benefit of other businesses.”
In addition, the OAG alleged Sephora failed to detect and comply with consumers’ requests to opt out of the sale of their personal information made through user-enabled global privacy controls. GPCs are universal opt-out signals that consumers can broadcast “across every website they visit” to indicate that their personal information should not be sold. Under the CCPA, a universal opt-out signal is required to be treated as if the consumer clicked on the “Do Not Sell My Personal Information” link on the website.
Under the settlement agreement, Sephora will pay a fine of $1.2 million, update the privacy policies on its website and mobile app, and implement a program that will monitor Global Privacy Control signals from consumers opting out of the sale of their personal information. Sephora must also provide an annual report to the OAG for the next two years on its ongoing efforts to remain compliant with the CCPA and the settlement agreement.
Prior to the enforcement action, many privacy attorneys did not interpret the CCPA to require companies to honor GPC signals. But the OAG’s enforcement action against Sephora sends a clear message that companies should comply with the GPC provision sooner rather than later.
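Because the GPC signal is transmitted mechanically by the visitor’s browser, detecting it is technically straightforward. The following is a minimal, hypothetical sketch (in TypeScript, using Node’s built-in http module) of how a website’s server might recognize the signal; per the GPC specification, participating browsers send a Sec-GPC: 1 request header and expose a navigator.globalPrivacyControl property to client-side scripts. The helper name and cookie handling below are illustrative assumptions only, not a compliance recipe or legal advice.

import { createServer, IncomingMessage, ServerResponse } from "http";

// Hypothetical helper: true when the visitor's browser broadcasts the
// Global Privacy Control signal (sent as the "Sec-GPC: 1" request header;
// Node lowercases incoming header names).
function visitorHasOptedOut(req: IncomingMessage): boolean {
  return req.headers["sec-gpc"] === "1";
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (visitorHasOptedOut(req)) {
    // Treat the signal like a click on "Do Not Sell My Personal Information":
    // record the opt-out and suppress third-party advertising/analytics tags
    // for this visitor (illustrative cookie only).
    res.setHeader("Set-Cookie", "opt_out_of_sale=1; Path=/; SameSite=Lax");
  }
  res.end("ok");
});

server.listen(8080);

Client-side code can perform an analogous check against navigator.globalPrivacyControl before loading third-party tags.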
Doe v. Partners Healthcare System, Inc., Suffolk Superior Court, C.A. No. 1984CV01651-BLS1
Mass General Brigham Incorporated and its owned-and-operated healthcare providers, as well as Dana-Farber Cancer Institute, Inc. (collectively, the “Defendants”), operate publicly accessible informational websites (the “Informational Websites”), which are available to the general public and do not require any login, username, or password to access. The Informational Websites do not require any type of registration or account creation and do not require any website visitor to provide proof of identity or to otherwise self-identify. The Informational Websites provide general information about the programs and offerings at the Defendants and use third-party website analytics tools, cookies, pixels, and related technologies. Plaintiffs John Doe and Jane Doe (the “Doe Plaintiffs”) filed this lawsuit asserting various legal claims on behalf of a putative class of website users who were also patients. Plaintiffs allege that the Defendants did not obtain sufficient consent when placing third-party analytics tools, cookies, and pixels on their general and publicly accessible websites. Plaintiffs further allege that, when Plaintiffs used the Informational Websites, code on those websites caused Plaintiffs’ internet browsers to disclose information about Plaintiffs’ internet use to third parties through these analytics tools, cookies, pixels, and related technologies. The Doe Plaintiffs also filed a motion for preliminary injunction seeking to enjoin the Defendants from using the third-party website analytics tools, cookies, pixels, and related technologies on the general and publicly accessible Informational Websites.
On November 20, 2020, the Suffolk Superior Court denied Plaintiffs’ motion for preliminary injunction, after Defendants had revised the cookie banners and privacy disclosures on the Informational Websites, and granted in part, and denied in part, Defendants’ motion to dismiss the case. The Defendants deny the Doe Plaintiffs’ allegations, deny any wrongdoing and any liability whatsoever, and believe that no Settlement Class Members, including the Doe Plaintiffs, have sustained any damages or injuries due to the use of third-party website analytics tools, cookies, pixels, and related technologies on the general and publicly accessible Informational Websites. The Defendants maintain that they were prepared to vigorously defend this lawsuit. The settlement is not an admission of wrongdoing or an indication that the Defendants have violated any laws.