The decision to use voice-controlled digital assistants, like Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and the Google Assistant, may present a Faustian bargain. While these technologies offer great potential for improving quality of life, they also expose users to privacy risks by perpetually listening for voice data and transmitting it to third parties.
Adding a voice-controlled digital assistant to any space presents a series of intriguing questions that touch upon fundamental privacy, liability, and constitutional issues. For example, should one expect privacy in the communications he engages in around a voice-controlled digital assistant? The answer to this question lies at the heart of how Fourth Amendment protections might extend to users of these devices and the data collected about those users.
Audio-recording capabilities also create the potential to amass vast amounts of data about specific users. The influx of this data can fundamentally change both the strength and the nature of the predictive models that companies use to inform their interactions with consumers. Do users have rights in the data they generate or in the individual profile created by predictive models based on that user’s data?
On another front, could a voice-controlled device enjoy its own legal protections? A recent case questioned whether Amazon may have First Amendment rights through Alexa. Whether a digital assistant’s speech is protected may be a novel concept, but as voice-controlled digital assistants become more “intelligent,” the constitutional implications become more far-reaching.
Further, digital assistants are only one type of voice-controlled device available today. As voice-controlled devices become more ubiquitous, another question is whether purveyors of voice-controlled devices should bear a heightened responsibility towards device users. Several security incidents related to these devices have caused legislators and regulators to consider this issue, but there remains no consensus regulatory approach. How will emerging Internet-of-Things frameworks ultimately apply to voice-controlled devices?
Voice-Activated Digital Assistants and the Fourth Amendment
Voice-activated digital assistants can create a record of one’s personal doings, habits, whereabouts, and interactions. Indeed, features incorporating this data are a selling point for many such programs. Plus, this technology can be available to a user virtually anywhere, either via a stand-alone device or through apps on a smartphone, tablet, or computer. Because a digital assistant may be in perpetual or “always-on” listening mode (absent exercise of the “mute” or “hard off” feature), it can capture voice or other data that the user of the device may not intend to disclose to the provider of the device’s services. To that end, users of the technology may give little thought to the fact that their communications with digital assistants can create a record that law enforcement (or others) potentially may access by means of a warrant, subpoena, or court order.
A recent murder investigation in Arkansas highlights Fourth Amendment concerns raised by use of voice-controlled digital assistants. While investigating a death at a private residence, law enforcement seized an Amazon Echo device and subsequently issued a search warrant to Amazon seeking data associated with the device, including audio recordings, transcribed records, and other text records related to communications during the 48-hour period around the time of death. See State of Arkansas v. Bates, Case No. CR-2016-370-2 (Circuit Court of Benton County, Ark. 2016).
Should one expect privacy in the communications he engages in around a voice-activated digital assistant? The Arkansas homeowner’s lawyer seemed to think so: “‘You have an expectation of privacy in your home, and I have a big problem that law enforcement can use the technology that advances our quality of life against us.’” Tom Dotan and Reed Albergotti, “Amazon Echo and the Hot Tub Murder,” The Information (Dec. 27, 2016) (hereinafter “Dotan”).
To challenge a search under the Fourth Amendment, one must have an expectation of privacy that society recognizes as reasonable. With few exceptions, one has an expectation of privacy in one’s own home, Guest v. Leis, 255 F.3d 325, 333 (6th Cir. 2001), but broadly, there is no reasonable expectation of privacy in information disclosed to a third party. Any argument that a digital-assistant user has a reasonable expectation of privacy in information disclosed through the device may be undercut by the service provider’s privacy policy. Typical privacy policies provide that the user’s personal information may be disclosed to third parties who assist the service provider in providing services requested by the user, and to third parties as required to comply with subpoenas, warrants, or court orders.
The Bates case suggests that data collected by digital assistants would receive no special treatment under the Fourth Amendment. The police seized the Echo device from the murder scene and searched its contents. Unlike a smartphone, whose contents could not be searched without a warrant, see Riley v. California, 134 S. Ct. 2473, 2491 (2014), the Echo likely had little information saved to the device itself. Instead, as an Internet-connected device, it would have transmitted information to the cloud, where it would be processed and stored. Thus, Arkansas law enforcement obtained a search warrant to access that information from Amazon.
Under existing law, it is likely a court would hold that users of voice-activated technology should expect no greater degree of privacy than search engine users. One who utilizes a search engine and knowingly sends his search inquiries or commands across the Internet to the search company’s servers should expect that the information will be processed, and disclosed as necessary, to provide the requested services.
Perhaps there is a discernible difference in that voice data, to the extent a service provider records and stores it as such, may contain elements that would not be included in a text transmission. For example, voice data could reveal features of the speaker’s identity (such as a regional accent), state of mind (such as excitement or sadness), or unique physical characteristics (such as hoarseness after yelling or during an illness), that would not be present in text.
Or perhaps it is significant that some information transmitted might enjoy a reasonable expectation of privacy but for the presence of the device. Although digital assistants usually have visual or audible indicators when “listening,” it is not inconceivable that a digital assistant could be compromised and remotely controlled in a manner contrary to those indicators.
Further, the device could be accidentally engaged, particularly when the “wake word” includes or sounds like another common name or word. This could trigger clandestine or unintentional recording of background noises or conversations when the device has not been otherwise intentionally engaged. See Dotan (“[T]he [Echo’s seven] microphones can often be triggered inadvertently. And those errant recordings, like ambient sounds or partial conversations, are sent to Amazon’s servers just like any other. A look through the user history in an Alexa app often reveals a trove of conversation snippets that the device picked up and is stored remotely; people have to delete those audio clips manually.”).
The technology of voice-activated digital assistants continues to advance, as evidenced by the recent introduction of voice-controlled products that include video capabilities and can sync with other “smart” technology. Increasing use of digital assistants beyond personal use will raise more privacy questions. As these devices enter the workplace, what protections should businesses adopt to protect confidential information potentially exposed by the technology? What implications does the technology have for the future of discovery in civil lawsuits? If employers utilize digital assistants, what policies should they adopt to address employee privacy concerns? And what are the implications under other laws governing electronic communications and surveillance?
First Amendment Rights for Digital Personal Assistants?
The Arkansas v. Bates case also implicates First Amendment issues. Amazon filed a motion to quash the search warrant, arguing that the First Amendment affords protections for both users’ requests and Alexa’s responses to the extent such communications involve requests for “expressive content.” The concept is not new or unique. For example, during the impeachment investigation of former President Bill Clinton, independent counsel Kenneth Starr sought records of Monica Lewinsky’s book purchases from a local bookstore. See In re Grand Jury Subpoena to Kramerbooks & Afterwords Inc., 26 Media L. Rep. 1599 (D.D.C. 1998).
Following a motion to quash filed by the bookstore, the court agreed the First Amendment was implicated by the nature of expressive materials, including book titles, sought by the warrant. Ms. Lewinsky’s First Amendment rights were affected, as were those of the bookseller, which the court acknowledged was engaged in “constitutionally protected expressive activities.” Content that may indicate an expression of views protected by free speech doctrine may be protected from discovery due to the nature of the content. Government investigation of one’s consumption and reading habits is likely to have a chilling effect on First Amendment rights. See U.S. v. Rumely, 345 U.S. 41, 57-58 (1953) (Douglas, J., concurring); see also Video Privacy Protection Act of 1988, 18 U.S.C. § 2710 (2002) (protecting consumer records concerning videos and similar audio-visual material).
Amazon relied on the Lewinsky case, among others, contending that discovery of expressive content implicating free speech laws must be subject to a heightened standard of court scrutiny. This heightened standard requires a discovering party (such as law enforcement) to show that the state has a “compelling need” for the information sought (including that it is not available from other sources) and a “sufficient nexus” between the information sought and the subject of the investigation.
The first objection raised by Amazon did not involve Alexa’s “right to free speech,” but instead concerned the nature of the “expressive content” sought by the Echo user and Amazon’s search results in response to the user’s requests. The murder investigation in question, coupled with the limited scope of the request to a 48-hour window, may present a compelling need and sufficient nexus that withstands judicial scrutiny.
However, Amazon raised a second argument that Alexa’s responses constitute an extension of Amazon’s own speech protected under the First Amendment. Again, the argument is supported by legal precedent.
In Search King, Inc. v. Google Tech., Inc., an Oklahoma federal court held that Google’s search results were constitutionally protected opinion. 2003 WL 21464568 (W.D. Okla. 2003). More recently, a New York federal court determined that Baidu’s alleged decision to block search results containing articles and other expressive material supportive of democracy in China was protected by the First Amendment. Jian Zhang v. Baidu.com, Inc., 10 F.Supp.3d 433 (S.D.N.Y. 2014). Accordingly, no action could lie for injunctive or other relief arising from Baidu’s constitutionally protected decisions.
The court considered search results an extension of Baidu’s editorial control, similar to that of a newspaper editor, and found that Baidu had a constitutionally protected right to display, or consciously decline to display, content. The court also analogized to a guidebook writer’s judgment about which attractions to feature or a political website aggregator’s decision about which stories to link to and how prominently to feature them.
One unique issue that arises in the context of increasingly “intelligent” computer searches is the extent to which results are not specifically chosen by humans, but instead returned according to computer algorithms. In Baidu, the court was persuaded by the fact that the algorithms are written by humans and thus “inherently incorporate the search engine company engineers’ judgments about what materials” to return for the best results. By its nature, such content-based editorializing is subject to full First Amendment protection because a speaker is entitled to autonomy to choose the content of his message. In other words, to the extent a search engine might be considered a “mere conduit” of speech, First Amendment protection might be less (potentially subject to intermediate scrutiny), but when the search results are selected or excluded because of the content, the search engine, as the speaker, enjoys the greatest protection.
Search results arising from computer algorithms that power search engines and digital assistants may currently be considered an extension of the respective companies’ own speech (through the engineers they employ). Current digital assistants are examples of “weak artificial intelligence.” Thornier legal questions will arise as the artificial intelligence in digital assistants gets smarter. The highest extreme of so-called “strong” artificial intelligence might operate autonomously and be capable of learning (and responding) without direct human input. The First Amendment rights of such systems will no doubt be debated as the technology matures.