B. Brown v. Google LLC and Calhoun v. Google LLC
Plaintiffs in Brown v. Google LLC, seeking to represent two nationwide classes, are Google account holders who used their browser in private browsing mode (which the Chrome browser calls “Incognito mode”). Plaintiffs alleged that Google collects private data from them while they are in private browsing mode “through means that include Google Analytics, Google ‘fingerprinting’ techniques, concurrent Google applications and processes on a consumer’s device, and Google’s Ad Manager.” Plaintiffs additionally alleged that Google can tell when a Chrome user enables private browsing mode. Plaintiffs alleged that they relied on Google’s representations that it would not collect their private data while they were in private browsing mode. Plaintiffs brought claims for invasion of privacy under California and federal law.
Similarly, plaintiffs in Calhoun v. Google LLC sought to represent Google Chrome browser users who “chose not to ‘Sync’ their [Chrome] browsers with their Google accounts while browsing the web.” Plaintiffs alleged that Google collects data from users of Google Chrome regardless of whether a user is logged in to her Google account. Plaintiffs alleged that they relied on Google’s promises that Chrome users “don’t need to provide any personal information to use Chrome” and that the “personal information that Chrome stores won’t be sent to Google unless you choose to store that data in your Google Account by turning on sync.”
In both Brown and Calhoun, Google moved to dismiss all claims based on consent and the statutes of limitations. Google argued that plaintiffs in both cases consented to Google’s collection of their data and that, in any event, all claims in both cases were barred by the statutes of limitations. Google additionally moved to dismiss the claims under the Wiretap Act and Stored Communications Act, arguing that the websites through which plaintiffs’ data was collected consented to Google’s receipt of the data. Although the facts differ, the court used the same reasoning to deny Google’s motions to dismiss in both cases.
The court held that Google did not demonstrate plaintiffs’ consent because Google did not notify users that it engages in the alleged data collection. Consent is a defense to plaintiffs’ claims, but the defendant bears the burden of proving it. The court held that consent must be “actual” and that the disclosure must “explicitly notify” users of the practice at issue.
In Brown, the court reasoned that 1) Google’s Privacy Policy does not disclose that Google collects plaintiffs’ private information while they are in private browsing mode and thus leads a reasonable user to conclude that Google does not collect data from users in private browsing mode; and 2) Google affirmatively represents that it cannot view users’ activity when they are in private browsing mode. The court therefore held that Google did not show that plaintiffs consented to Google’s collection of data in private browsing mode. The court reached the same conclusion in Calhoun based on similar reasoning.
The court also denied Google’s motion to dismiss claims under the Wiretap Act and Stored Communications Act. Google argued that websites had given Google implied consent to intercept the users’ data. But the court held that, even assuming that Google has established that websites generally consented to the interception of their communications with users, “Google does not demonstrate that websites consented to, or even knew about, the interception of their communications with users who were using Chrome without sync.”
In assessing plaintiffs’ invasion of privacy and intrusion upon seclusion claims, the court applied a two-element standard: 1) whether plaintiffs had a reasonable expectation of privacy; and 2) whether the intrusion was highly offensive. On the first element, the court considered “the amount of data collected, the sensitivity of the data collected, and the nature of the data collection,” as well as Google’s representations. On the second element, the court engaged in “a holistic consideration of factors such as the likelihood of serious harm to the victim, the degree and setting of the intrusion, the intruder’s motives and objectives, and whether countervailing interests or social norms render the intrusion inoffensive.” The court concluded that the plaintiffs’ allegations were adequate to support both elements.
C. Schrems II
The issue in the case commonly referred to as Schrems II was whether Facebook Ireland should be prohibited from transferring the personal data of Maximillian Schrems, an Austrian national residing in Austria who used Facebook, to Facebook Inc. in the United States. Schrems argued that the United States did not ensure an adequate level of protection for personal data transferred from the European Union (“EU”) to the United States. The Court of Justice of the European Union agreed, holding that the Privacy Shield Decision (“PSD”) is invalid.
In the PSD, the European Commission had determined that “the United States ensures an adequate level of protection for personal data transferred from the Union to self-certified organisations in the United States under the EU-U.S. Privacy Shield.” In evaluating that determination, the court considered two issues: 1) whether the limitations on the protection of personal data under U.S. law are delimited in a sufficiently clear and precise manner; and 2) whether effective administrative and judicial redress exists for an individual to pursue a legal remedy for unlawful processing of his or her personal data.
On the first element, the court held that the U.S. surveillance programs based on section 702 of the Foreign Intelligence Surveillance Act (“FISA”) and Executive Order (“E.O.”) 12333 do not lay down clear and precise rules limiting their power to interfere with the fundamental rights conferred by the Charter of Fundamental Rights of the European Union. The court therefore held that the surveillance programs addressed in the PSD fail to include adequate safeguards.
On the second element, the court held that no effective judicial remedy exists with respect to the U.S. surveillance programs. The court found that section 702 of FISA and E.O. 12333 do not grant data subjects rights enforceable in the courts against U.S. authorities. Additionally, at least some of the legal bases on which U.S. intelligence authorities may rely, including E.O. 12333, are not covered by any redress mechanism. The court thus held that the Privacy Shield Decision also does not satisfy the second element.
The court’s holding in Schrems II poses significant challenges to U.S. businesses and other private entities seeking transfers of personal data from the EU to the United States. To address these challenges, the U.S. government has sought to clarify the meaning of existing legislation to demonstrate that the use of personal data is delimited in a clear and precise manner and that the United States has redress mechanisms in place for individuals harmed by the transfer of personal data. In September 2020, a trio of U.S. government agencies published a white paper titled Information on U.S. Privacy Safeguards Relevant to SCCs and Other EU Legal Bases for EU-U.S. Data Transfers After Schrems II (“White Paper”). The White Paper explains that it is “not intended to provide companies guidance about EU law or what positions to take before European courts or regulators,” but rather provides an “up-to-date and contextualized discussion of . . . U.S. law and practice.” The White Paper makes three main points: 1) most U.S. companies do not engage in data transfers that are of interest to U.S. intelligence agencies and therefore do not pose the types of risks at issue in Schrems II; 2) the U.S. government frequently shares intelligence information with EU Member States for terrorism-related and other purposes, and such information-sharing serves important EU public interests; and 3) Schrems II did not take account of new developments in the United States since the Privacy Shield Decision.
Private entities that previously relied on the Privacy Shield framework may wish to make use of what are known as Standard Contractual Clauses governing the transfer of personal data between the EU and the United States. As the White Paper explains, companies that take this route “are responsible for undertaking their own independent analyses of all relevant and current U.S. law relating to intelligence agencies’ access to data, as well as the facts and circumstances of data transfers and any applicable safeguards.”
In other words, a well-worded contractual clause may strengthen a business’s claim that it can itself provide a sufficient level of protection for personal data transferred from the EU to the United States.
III. Biometric Privacy
Illinois’s Biometric Information Privacy Act (“BIPA”), enacted in 2008, was the first state statute that regulated the use of an individual’s biometric information. BIPA defines a “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” BIPA offers a justification for regulating the use of biometric identifiers and information: “[b]iometrics . . . are biologically unique to the individual; therefore, once compromised, the individual has no recourse, is at heightened risk for identity theft, and is likely to withdraw from biometric-facilitated transactions.”
BIPA provides a private right of action, allowing individuals, often representing a class, to bring lawsuits against corporations for violations of BIPA. A notable class action case is In re Facebook Biometric Information Privacy Litigation, in which the U.S. District Court for the Northern District of California approved a $650 million settlement for a class of more than 6 million individuals. The plaintiffs in this action alleged that Facebook violated BIPA section 15(a) and (b) by collecting and storing users’ biometric data, such as digital scans of their faces, without prior notice and consent in connection with Facebook’s Tag Suggestions feature for users’ uploaded photos.
Courts have reached different results on the issue of whether a violation of BIPA meets the injury-in-fact requirement for Article III standing. The Seventh Circuit held in Thornley v. Clearview AI, Inc. that merely alleging the defendant’s violation of BIPA section 15(c), without alleging that the plaintiffs suffered any injury as a result of the violation, does not meet the injury-in-fact requirement. Because the allegation describes “only a general, regulatory violation,” the court affirmed the district court’s decision to remand the case to state court. In contrast, the same court in Fox v. Dakkota Integrated Systems, LLC held that the defendant’s alleged failure to develop and comply with a data-retention schedule as required by BIPA section 15(a) gave rise to harm satisfying the injury-in-fact requirement. Earlier cases, too, reached different outcomes. These outcomes turn on whether the plaintiffs stated claims under a particular BIPA provision and alleged a particularized harm, or a risk of harm, resulting from the violation of BIPA.
Following the enactment of BIPA, several other states enacted or are in the process of enacting comparable legislation. Texas and Washington adopted statutes in prior years, but neither of these includes a private right of action. Maryland and New York are each considering similar bills. As with BIPA, Maryland’s and New York’s bills provide a private right of action.
The major differences between BIPA and the bills in New York and Maryland concern the definition of biometric identifier and the notice requirement. First, while the New York bill mirrors BIPA’s definition of biometric identifier, the Maryland bill omits the specific reference to “scan of hand or face geometry.” Instead, the Maryland bill refers generally to “other unique biological” patterns or characteristics used to identify a specific individual and further adds “genetic print” to the definition. Second, BIPA and the New York bill require that an entity seeking to collect or obtain an individual’s biometric identifier provide notice of the collection and obtain the individual’s consent. Maryland’s bill includes no such requirement.
Some state and local governments, recognizing that facial recognition technology can reinforce racial biases and contribute to the erosion of privacy, have banned or limited certain uses of the technology. Virginia has banned the use of facial recognition by police, and Washington has imposed limitations on government use of facial recognition technologies. Some cities, including San Francisco, Minneapolis, and Boston, have passed ordinances prohibiting the use of facial recognition technologies by police and/or other city officers. The city of Portland, Oregon, has prohibited the use of facial recognition technologies by private parties in places of public accommodation.
In 2021, the U.S. House of Representatives passed the George Floyd Justice in Policing Act, which would prohibit federal law enforcement officers wearing body cameras from using facial recognition technology. The Act provides that “[n]o camera or recording device authorized or required to be used under this part may be equipped with or employ facial recognition technology, and footage from such a camera or recording device may not be subjected to facial recognition technology.”
IV. Conclusion
The privacy law landscape continues to evolve, and data security and biometric privacy will remain top concerns. Staying ahead of these issues will require businesses to adapt and keep pace. Companies will need to continue developing policies and procedures to address increasing litigation and new technologies, and to monitor the ever-changing landscape of U.S. privacy law.