

First Amendment Considerations in the Federal Regulation of Social Media Networks’ Algorithmic Speech, Part I

Veronika Balbuzanova

Summary

  • The article examines the impact of social media algorithms and the legal considerations they raise, primarily focusing on Facebook’s algorithms, which have come under increasing scrutiny.
  • What are the constitutional implications of regulating such algorithms under the First Amendment?

Facebook has been wildly successful in bringing people together. Groups, pages, profiles, and private chats have enabled users to connect with everything and everyone—from a favorite celebrity or preferred brand to a social cause or long-lost friend. That same roaring success, however, has been at the root of the company’s harshest criticism in its more than decade-long history. The proprietary algorithms that lie at the heart of the social media conglomerate have come under uniquely intense scrutiny for the role they continue to play in amplifying divisiveness and inciting offline violence across the globe. In much the same way that Facebook’s algorithms have fueled historic social movements such as #MeToo and Black Lives Matter by connecting and bringing together supporters of those causes, they have simultaneously enabled extremist views advocating violence, racism, terrorism, and even genocide to spread like wildfire through the same methodology.

These issues have translated into increasingly pronounced calls for change in the industry. On a macro level, reforms are demanded both internally—whereby social media companies are pressured to strengthen their internal governance and user policies (and strictly enforce them)—and externally—whereby policymakers are expected to propose and pass meaningful legislation that adequately addresses and prevents these real-world consequences. Few articles, however, address the legal parameters of the latter option.

This two-part series does not purport to set out what the content of such proposed legislation should be, but rather what First Amendment free speech considerations must be taken into account when drafting it. If federal rules and regulations are to be imposed on the nature and type of algorithms that social media companies may use, there is real value in proactively analyzing the probability that such legislation would withstand a legal and, more specifically, a constitutional challenge. Part I of this series analyzes whether social network algorithms constitute protectable free speech. Part II analyzes the level of constitutional scrutiny to which laws regulating such algorithms would likely be subject (and whether they would pass constitutional muster).

The Effects of Social Media Algorithms

In a (wildly successful) attempt to ensure their continued growth and financial success, some social media platforms—Facebook and Instagram being the most prominent of the bunch—have devised algorithms designed to draw in and hold users’ attention. The basic premise of a social network algorithm is that the more time a user spends on the app, i.e., the specific social media platform at issue, the more ads the algorithm can present to the user and, in turn, the more money the app collects for presenting those ads. To that end, the social media monetization model is no different from the television model. Viewers tune in to watch the newest episode of their favorite show, but that content is not presented to them all at once, without interruption; they are shown perhaps 15 minutes of the show, followed by 4 minutes of commercials, then more of the show, more commercials, and so on.

For the most part, this is where the similarity ends. Because keeping users engaged in the app is in the social media companies’ best interests, they have a vested interest in showing each user only the content that will maximize the amount of time that user spends in the app (in contrast to television, where the content viewable at any given time by two unrelated viewers is identical). At the most fundamental level, that is the job—and the goal—of the algorithm: it analyzes all available data about the user and predicts which content will hold the user’s attention for another 30 seconds, and then another. (The data privacy considerations associated with the harvesting, storage, and use of these data are a wholly separate legal issue not addressed in this series, nor is the applicability and effect of existing federal law, including the Communications Decency Act of 1996.) In particular, where the algorithm is driven by machine learning technology, its tendency to accurately and repeatedly select posts, pictures, videos, and notifications that capture the user’s attention for long stretches at a time is so pronounced that some critics have dubbed it borderline manipulation.
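To make these mechanics concrete, the following is a minimal sketch of engagement-driven feed ranking. It assumes a generic popularity-times-affinity score; the names (Post, predict_engagement, rank_feed), topics, and weights are hypothetical illustrations, not any platform’s actual proprietary system. Note that the score is blind to what the content actually says.

```python
# Hypothetical sketch of engagement-driven feed ranking (illustrative only).
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    likes: int
    comments: int
    reshares: int

def predict_engagement(affinities: dict[str, float], post: Post) -> float:
    """Toy proxy for a trained model: raw popularity weighted by the user's
    inferred affinity for the post's topic. Nothing in the score reflects
    whether the content is an inspirational post or hate speech."""
    popularity = post.likes + 2 * post.comments + 3 * post.reshares
    return popularity * affinities.get(post.topic, 0.1)

def rank_feed(affinities: dict[str, float], candidates: list[Post]) -> list[Post]:
    """Order the newsfeed so the highest predicted engagement appears first."""
    return sorted(candidates,
                  key=lambda p: predict_engagement(affinities, p),
                  reverse=True)

# Example: a less popular post ranks first because this user's inferred
# affinity for its topic is higher.
user = {"sports": 0.9, "politics": 0.2}
feed = rank_feed(user, [
    Post("p1", "politics", likes=300, comments=20, reshares=5),
    Post("p2", "sports", likes=120, comments=15, reshares=2),
])
print([p.post_id for p in feed])  # ['p2', 'p1']
```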

Popular posts—posts that rake in the highest engagement levels, such as likes, comments, and re-shares—are more likely to be picked up by the algorithm and to pop up automatically on users’ newsfeeds as suggested content. Until recently, the algorithm could not differentiate between “positive” posts, such as an inspirational #MeToo post, and “negative” posts, such as ones containing hate speech or misinformation. This resulted in real-world consequences in several countries and cultures.

In Myanmar, Facebook users and military figures disseminated hate speech in the form of posts, comments, and pornographic images targeting the Rohingya minority and other Myanmar Muslims. This resulted in mass violence, rape, and genocide against tens of thousands of Rohingya Muslims, while more than 750,000 of them fled the country. See Bansari Kamdar, “Facebook’s Problematic History in South Asia,” Diplomat, Aug. 19, 2020. Myanmar stands accused at the International Court of Justice, the principal judicial organ of the United Nations, of having conducted “a brutal campaign of killings, mass rape, arson and ethnic cleansing against the Rohingya.” See Steve Stecklow, “Hatebook: Inside Facebook’s Myanmar Operation,” Reuters Investigates, Aug. 15, 2018.

In Sri Lanka, hate speech and misinformation escalated ethnic tensions between Sinhalese Buddhists and Tamil Muslims, resulting in anti-Muslim violence, mob attacks, civil unrest, and at least two deaths in Kandy. See Kamdar, supra; Joshua Brustein, “Facebook Apologizes for Role in Sri Lankan Violence,” Bloomberg, May 12, 2020.

In India, the dissemination of inflammatory anti-Muslim hate speech on the social media platform by Bharatiya Janata Party (BJP) state legislator T. Raja Singh contributed to communal riots that ravaged the city of New Delhi. See Kamdar, supra; Devjyot Ghoshal & Alexander Smith, “Delhi city lawmakers summon Facebook India chief over February riots,” Reuters, Sept. 12, 2020.

Though the individual who posts such reprehensible content is undoubtedly responsible for its creation, the algorithm is largely responsible for its targeted and widespread dissemination. In other words, the algorithm disseminates the content to users whose attention it predicts, with some degree of certainty, will be captured by that content. Based on the plethora of data points and information the social media company already has about the user, it can successfully predict whether a post containing hate speech or violent rhetoric will receive the user’s attention and engagement, e.g., likes, comments, and shares. This naturally creates an echo chamber effect in which such content circulates among users who already sympathize, associate, or identify with the violent views or are predisposed to do so. In this way, the algorithm easily recruits and enables sympathizers, while other features of the social media platform—e.g., instant messaging, event calendars, and posts—help them stay connected and informed.
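The feedback loop just described can be sketched in a few lines: each engagement raises the inferred affinity that the ranking step consults, so similar content ranks ever higher for that user. The update rule and numbers below are hypothetical, chosen only to illustrate the dynamic.

```python
# Hypothetical sketch of the echo-chamber feedback loop described above.
def update_affinity(affinities: dict[str, float], topic: str, engaged: bool) -> None:
    """Reinforce inferred affinity on engagement; decay it slightly otherwise."""
    current = affinities.get(topic, 0.1)
    affinities[topic] = min(current * 1.5, 1.0) if engaged else current * 0.95

# A user who repeatedly engages with one divisive topic sees its affinity,
# and hence its rank in every subsequent feed, climb toward the maximum.
affinities = {"local news": 0.5, "violent rhetoric": 0.1}
for _ in range(6):
    update_affinity(affinities, "violent rhetoric", engaged=True)
    update_affinity(affinities, "local news", engaged=False)
print(affinities)  # "violent rhetoric" now far outweighs "local news"
```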

The Basis for Legislation

Any legislative solution to the problem of social network algorithms will necessarily implicate the First Amendment of the United States Constitution. The law considers an algorithm to be just as much a form of protected speech as, say, a musical score. Fundamentally, an algorithm comprises computer code that can theoretically be written by anyone and can be used to communicate or convey information. Accordingly, federal courts have consistently held that computer code can constitute protectable speech for purposes of the First Amendment and copyrightable subject matter entitled to federal copyright protection. See, e.g., Universal City Studios, Inc. v. Corley, 273 F.3d 429, 449 (2d Cir. 2001) (finding that computer code and computer programs constructed from code that convey information can merit First Amendment protection); Johnson Controls v. Phoenix Control Sys., 886 F.2d 1173, 1175 (9th Cir. 1989) (“Source code and object code, the literal components of a program, are consistently held protected by a copyright on the program.”); Green v. United States DOJ, 392 F. Supp. 3d 68, 86 (D.D.C. 2019) (“Code is speech precisely because, like a recipe or a musical score, it has the capacity to convey information to a human.”); Bernstein v. U.S. Dep’t of State, 922 F. Supp. 1426, 1436 (N.D. Cal. 1996) (holding that source code constitutes protectable speech for purposes of the First Amendment).

In a line of emerging jurisprudence, federal district courts across the country have likewise found that the First Amendment also protects search engine output results such as those generated by a simple Google search. See, e.g., e-ventures Worldwide, LLC v. Google, Inc., 188 F. Supp. 3d 1265, 1274 (M.D. Fla. 2016) (“The Court has little quarrel with the cases cited by Google for the proposition that search engine output results are protected by the First Amendment.”); Zhang v. Baidu.Com, Inc., 10 F. Supp. 3d 433, 438–39 (S.D.N.Y. 2014); Langdon v. Google, Inc., 474 F. Supp. 2d 622, 630 (D. Del. 2007); Kinderstart.Com, LLC v. Google, Inc., No. C06-2057JF(RS), 2007 U.S. Dist. LEXIS 22637, at *1 (N.D. Cal. Mar. 16, 2007); Search King, Inc. v. Google Tech., Inc., No. CIV-02-1457-M, 2003 U.S. Dist. LEXIS 27193, at *1 (W.D. Okla. May 27, 2003). As particularly relevant to the present analysis, the U.S. District Court for the Southern District of New York aptly noted its rationale as follows:

Nor does the fact that search-engine results may be produced algorithmically matter for the analysis. After all, the algorithms themselves were written by human beings, and they “inherently incorporate the search engine company engineers’ judgments about what material users are most likely to find responsive to their queries.” In short, one could forcefully argue that “what is true for parades and newspaper op-ed pages is at least as true for search engine output. When search engines select and arrange others’ materials, and add the all-important ordering that causes some materials to be displayed first and others last, they are engaging in fully protected First Amendment expression — ‘[t]he presentation of an edited compilation of speech generated by other persons.’”

Zhang v. Baidu.Com, Inc., 10 F. Supp. 3d 433, 438–39 (S.D.N.Y. 2014) (quoting Eugene Volokh & Donald M. Falk, “Google: First Amendment Protection for Search Engine Search Results,” 8 J.L. Econ. & Pol’y 883, 884, 891 (2012)).

The similarity is striking. Social media algorithms are written by individuals and are designed to select and arrange posts, videos, and other content so that whatever is judged, based on all available data points, most likely to capture the user’s attention appears first on the user’s newsfeed, followed by content with a progressively lower predicted likelihood of achieving that result. Algorithms that prioritize and generate social media content differ from those generating search engine results only in the output produced: posts, videos, ads, and other content under the former, and search engine results under the latter (a parallel sketched in code below). The judgment call, the all-important ordering, and the edited compilation of speech generated, not to mention the human role at the center of it all, are akin to parades and newspaper op-ed pages. Given that the algorithms themselves embody the foregoing characteristics, courts would likely find that the computer code that makes up a social network algorithm is protected speech under the First Amendment (as well as subject matter properly entitled to copyright protection).
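The structural parallel the courts draw can be expressed in a short sketch: both a search engine and a newsfeed select and order other people’s material by a score that encodes human judgment, and they differ only in what that score approximates. Every name below is a hypothetical stand-in, not an actual implementation.

```python
# Hypothetical sketch: search ranking and feed ranking as one operation.
from typing import Callable

def ordered_output(items: list[str], judgment: Callable[[str], float]) -> list[str]:
    """The 'all-important ordering': display the highest-scored material first."""
    return sorted(items, key=judgment, reverse=True)

# Toy stand-ins for the two judgment calls; real systems use complex models.
def relevance(page: str) -> float:
    return float("election" in page.lower())  # search engine: relevance to a query

def engagement(post: str) -> float:
    return float(len(post))                   # newsfeed: predicted engagement

search_results = ordered_output(["Election results live", "Recipe blog"], relevance)
newsfeed = ordered_output(["short post", "a longer, attention-grabbing post"], engagement)
print(search_results, newsfeed)
```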
