

First Amendment Considerations in the Federal Regulation of Social Media Networks’ Algorithmic Speech, Part II

Veronika Balbuzanova


  • The article examines the constitutional implications of regulating social network algorithms under the First Amendment's Free Speech Clause. It suggests that such algorithms, while potentially falling under commercial speech subject to intermediate scrutiny, may lose First Amendment protection if shown to incite imminent lawless action.
  • Documented instances of social network algorithms influencing user behavior strengthen the case for regulation that addresses hate speech and misinformation while safeguarding constitutional rights.

This is the second part of a two-part series. Read Part I.

Finding that a social network algorithm falls within the scope of the First Amendment’s Free Speech Clause is only the first part of the constitutional inquiry. A court ruling on the constitutionality of legislation regulating social network algorithms would then need to determine the type of protectable speech at issue and the corresponding level of scrutiny that applies. Because a social network algorithm of the sort at issue here “relates solely to the economic interests of the speaker and its audience,” it would likely be categorized as commercial speech subject to intermediate scrutiny. Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 561 (1980). Although the content circulated as a function of the algorithm may plausibly relate to matters of public concern, that content is not the speech being analyzed. See Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433, 443 (S.D.N.Y. 2014) (explaining that speech with some commercial aspects is not “commercial speech” when the statements “relate to matters of public concern and do not themselves propose transactions”). In other words, a distinction must be drawn between the content comprising an algorithm, i.e., code, and the content that is disseminated when that algorithm is run, e.g., advertisements, posts, and videos. The former is the actual commercial speech being analyzed, whereas the latter is simply an output or product of the former.
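To make the code-versus-output distinction concrete, consider a minimal, purely hypothetical sketch in Python (the function, fields, and weights are all invented for illustration and do not reflect any platform's actual system). The function itself is the kind of expression a court would analyze as commercial speech; the ordered feed it returns is merely the output or product of that expression.

```python
# Illustrative only: a hypothetical ranking function standing in for a
# social network algorithm. The *code* below is the commercial speech a
# court would analyze; the ranked feed it returns is merely its output.

def rank_posts(posts):
    """Order posts by a toy engagement score (likes + 2 * shares)."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

feed = rank_posts([
    {"id": "ad-1", "likes": 10, "shares": 1},   # score 12
    {"id": "post-2", "likes": 3, "shares": 9},  # score 21
])
# The ordered feed (the disseminated content) is distinct from the
# code that produced it (the algorithm).
```

Under the article's framing, only the first of these two artifacts, the code, is the speech whose protection is at issue.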

Under the intermediate level of scrutiny, federal regulation would be upheld if it serves a substantial government interest, directly advances that asserted interest, and is no more extensive than necessary to serve that interest. See Cent. Hudson, 447 U.S. at 566. Before a court undertakes this analysis, however, it will first have to determine that the speech is entitled to constitutional protection at all, i.e., that it concerns lawful activity and is not false or misleading. This threshold requirement is where social network algorithms face the greatest risk of being denied First Amendment protection.

The issue of whether social network algorithms concern lawful activity is debatable. There is no question that writing code and developing algorithms are not legally prohibited activities. One is as free to develop an algorithm as one is to write a novel. However, as with any form of speech, there are constraints on this constitutional freedom that enjoin its exercise where such exercise is directed to inciting or producing imminent lawless action. See Brandenburg v. Ohio, 395 U.S. 444, 447 (1969). It is a question of proximity and degree depending, in large part, on the circumstances in which it is done. See Schenck v. United States, 249 U.S. 47, 52 (1919). Merely advocating or teaching lawless action is differentiated from “preparing a group for violent action and steeling it to such action,” with only the latter being subject to governmental regulation. Noto v. United States, 367 U.S. 290, 297–98 (1961); see also Brandenburg, 395 U.S. at 447. Given the extensive record of violence that social network algorithms have helped cause in countries worldwide, a court may plausibly find that the code comprising this technology, as currently written, does indeed prepare and steel groups to lawless action. As discussed in the first part of this series, algorithms were responsible for the widespread circulation of hate speech that incited mass violence, rape, and genocide against the Rohingya Muslims in Myanmar. Algorithms were responsible for disseminating hate speech and misinformation that directly resulted in anti-Muslim violence, mob attacks, civil unrest, and at least two deaths in Kandy, Sri Lanka. Algorithms were responsible for distributing anti-Muslim hate speech that induced communal riots in New Delhi, India.

These countless incidents of lawless action are not indirect or tangential effects of social network algorithms. Rather, they are the exact effects the algorithms are designed and intended to have. In 2014, Facebook learned how it could affect and alter its users’ emotional states. See Adam D. I. Kramer, Jamie E. Guillory & Jeffrey T. Hancock, “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences of the United States of America (June 17, 2014); Gregory S. McNeal, “Facebook Manipulated User News Feeds To Create Emotional Responses,” Forbes, June 28, 2014. In that massive study of nearly 700,000 Facebook users, researchers Kramer, Guillory, and Hancock concluded that users’ emotions could be manipulated merely by altering the content users were exposed to on their newsfeeds. The study explained this phenomenon, dubbed emotional contagion, as follows:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks, although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

Notably, the study expressly discussed the applicability of emotional contagion to newsfeed content generated by social network algorithms.

Because people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends. News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging.
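The experimental manipulation the study reports, withholding posts of one emotional valence from a feed before it is shown, can be sketched illustratively as follows. All names, fields, and probabilities here are invented for exposition; this is not Facebook's actual code.

```python
import random

# Hypothetical sketch of the manipulation the study describes:
# probabilistically omitting posts of one emotional valence
# ("positive" or "negative") from a user's feed.

def filter_feed(posts, reduce_valence, omit_prob, rng):
    """Drop each post of the targeted valence with probability omit_prob."""
    kept = []
    for post in posts:
        if post["valence"] == reduce_valence and rng.random() < omit_prob:
            continue  # withheld from this user's feed
        kept.append(post)
    return kept

rng = random.Random(0)
posts = [{"id": i, "valence": "positive" if i % 2 else "negative"}
         for i in range(100)]
# With omit_prob=1.0, every positive post is withheld; only negative remain.
reduced = filter_feed(posts, reduce_valence="positive", omit_prob=1.0, rng=rng)
```

The study's finding was that users exposed to such a filtered feed produced fewer posts of the suppressed valence themselves, i.e., the filter's effect propagated into user behavior.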

The Cambridge Analytica scandal, which involved data harvested beginning that same year but came to light in 2018, subsequently revealed that users’ physical actions, not just their emotional states, could be readily manipulated based on the curated content shown to them. See Carole Cadwalladr & Emma Graham-Harrison, “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach,” Guardian, Mar. 17, 2018.

Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour. The algorithm and database together made a powerful political tool. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.

In other words, social media networks could predict, with a substantial degree of certainty, which content would emotionally stimulate a user to perform a specific physical act. In the case of the Cambridge Analytica scandal, the specific act or output the company was allegedly seeking to elicit was a vote for Donald Trump. “The Cambridge Analytica Story, Explained,” Wired. Whether former President Trump’s 2016 victory can actually be attributed to the company’s targeted digital marketing may never be known, but what is known is that this type of tailored content is proven to influence a user’s feelings and actions. A 2019 study conducted by the Pew Research Center found that, when directed to their ad preferences page, a majority of Facebook users (59 percent) said the categories accurately reflected their real-life interests. Thus, both scientific studies and users themselves have confirmed the accuracy of social network algorithms. Paul Hitlin & Lee Rainie, Facebook Algorithms and Personal Data (Pew Research Ctr. Jan. 16, 2019).
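The pipeline the Guardian describes, scoring personality traits from profile data and flagging persuadable voters for targeted messaging, might look schematically like the following hypothetical sketch. The trait names, weights, and threshold are invented for illustration and are not Cambridge Analytica's actual model.

```python
# Hypothetical sketch of a profile-to-targeting pipeline: score a toy
# "persuadability" trait from profile data, then flag likely swing
# voters. All fields, weights, and thresholds are invented.

def persuadability(profile):
    """Toy score: higher neuroticism and lower partisanship -> more persuadable."""
    return 0.6 * profile["neuroticism"] + 0.4 * (1.0 - profile["partisanship"])

def flag_swing_voters(profiles, threshold=0.5):
    """Return the ids of profiles scoring above the targeting threshold."""
    return [p["id"] for p in profiles if persuadability(p) > threshold]

voters = [
    {"id": "A", "neuroticism": 0.9, "partisanship": 0.2},   # persuadable
    {"id": "B", "neuroticism": 0.1, "partisanship": 0.95},  # committed partisan
]
targets = flag_swing_voters(voters)  # only "A" is flagged for messaging
```

The point of the sketch is the architecture, not the numbers: profile data flows into a trait model, and the model's output determines who is shown which persuasive content.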

Many social media networks have had no scruples about exploiting these technological tools and capabilities. For example, in 2016, Facebook publicly introduced FBLearner Flow, a prediction engine powered by artificial intelligence and designed to deliver “a more personalized experience for people using Facebook.” Jeffrey Dunn, “Introducing FBLearner Flow: Facebook’s AI backbone,” Facebook: Engineering, May 9, 2016. In a 2016 post, Facebook touted that “[o]ur prediction service has grown to make more than 6 million predictions per second.” Id. Confidential documents leaked two years later revealed that Facebook launched a new advertising service that offered advertisers the ability to target users based on predictions generated by FBLearner Flow of users’ future behavior. By feeding in the collection of personal data points that Facebook has on each user—which sources indicate may range anywhere from 98 to 52,000 data points and may include a user’s location, device information, Wi-Fi network details, video usage, affinities, and details of friendships—FBLearner Flow predicts how the user will behave in the future, e.g., what the user will purchase. Sam Biddle, “Facebook Uses Artificial Intelligence To Predict Your Future Actions For Advertisers, Says Confidential Document,” Intercept, Apr. 13, 2018; Caitlin Dewey, “98 personal data points that Facebook uses to target ads to you,” Wash. Post, Aug. 19, 2016; Adam Green, “Facebook’s 52,000 data points on each person reveal something shocking about its future,” Kim Komando, Sept. 17, 2018.

One slide in the document touts Facebook’s ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven’t even made yet. This would, potentially, give third parties the opportunity to alter a consumer’s anticipated course. Here, Facebook explains how it can comb through its entire user base of over 2 billion individuals and produce millions of people who are “at risk” of jumping ship from one brand to a competitor. These individuals could then be targeted aggressively with advertising that could pre-empt and change their decision entirely—something Facebook calls “improved marketing efficiency.”

Biddle, supra.
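The capability the leaked documents describe, predicting which users are "at risk" of switching brands so that they can be targeted with preemptive advertising, can be illustrated with a hypothetical sketch. Every feature name, weight, and threshold below is invented; it is not FBLearner Flow's actual logic.

```python
# Hypothetical sketch of a churn-prediction step: combine invented
# behavioral signals into a risk score, then select the "at risk"
# users an advertiser could target preemptively.

def churn_risk(user):
    """Toy risk score from behavioral signals (all fields hypothetical)."""
    score = 0.0
    if user["competitor_page_visits"] > 3:
        score += 0.5  # browsing a rival brand
    if user["brand_engagement_trend"] < 0:
        score += 0.4  # declining engagement with the current brand
    return score

def at_risk_users(users, threshold=0.6):
    """Return ids of users whose risk score meets the targeting threshold."""
    return [u["id"] for u in users if churn_risk(u) >= threshold]

users = [
    {"id": "u1", "competitor_page_visits": 5, "brand_engagement_trend": -0.2},
    {"id": "u2", "competitor_page_visits": 0, "brand_engagement_trend": 0.1},
]
risky = at_risk_users(users)  # only "u1" is flagged
```

What matters legally is the final step the real system adds: the flagged users are then shown content chosen specifically to alter the predicted behavior.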

Is First Amendment Protection Applicable?

What implications does this have for the categorization of social media networks’ algorithmic speech? In short, it potentially strips such speech of the heightened First Amendment protection it would enjoy were it classified as commercial speech. To qualify as commercial speech, a social network algorithm must concern lawful activity, which the direct incitement of imminent lawless action can never be. If a social media network knows how to influence any user’s behavior effectively (indeed, if it has sponsored scientific studies and conducted countless experiments and real-life case studies to confirm this knowledge definitively), and if a clear correlation can be demonstrated between the content generated and shown to that user and the user’s resulting lawless conduct, it follows that the content was very much directed to inciting the imminent lawless action that followed: the social media network knows that such lawless action will imminently follow and shows the content to the user in spite of this knowledge. Indeed, Justice Thomas’s statement respecting the denial of certiorari in the recent case of Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 208 L. Ed. 2d 197 (Oct. 13, 2020), suggests that the scope of the immunity provision in section 230(c)(1) of the Communications Decency Act (CDA) was never meant to insulate interactive computer services like Facebook from liability where they distribute or circulate content that they know is illegal, e.g., where a user has reported or flagged the content as defamatory. Justice Thomas’s statement effectively implies that the current judicial interpretation of the CDA’s immunity provision is misplaced and that this legal issue is ripe for adjudication by the Supreme Court.

Each post that any given user encounters on his or her newsfeed is specifically chosen for that user. One’s newsfeed consists of meticulously curated content based on hundreds, if not thousands, of data points that the social network algorithm has been given about that user. From these data points, a prioritized list of content uniquely tailored to that user is generated. If newsfeed content is anything at all, it is pointedly directed at a user. That is, after all, the very appeal of the newsfeed feature itself.
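The per-user curation just described can likewise be sketched in hypothetical form: candidate posts are scored against a single user's stored data points (here reduced to invented topic affinities) to produce the prioritized, uniquely tailored list. Names and values are illustrative only.

```python
# Illustrative only: building one user's prioritized feed from that
# user's stored data points (here, invented topic-affinity scores).

def personalize(posts, affinities):
    """Rank posts by how strongly each topic matches this user's affinities."""
    return sorted(posts, key=lambda p: affinities.get(p["topic"], 0.0),
                  reverse=True)

user_affinities = {"politics": 0.9, "sports": 0.2}  # this user's data points
feed = personalize(
    [{"id": 1, "topic": "sports"}, {"id": 2, "topic": "politics"}],
    user_affinities,
)
# A different user, with different affinities, would receive a
# differently ordered feed from the very same candidate posts.
```

The same two posts yield a different ordering for every user, which is precisely why the output can fairly be called "pointedly directed" at its recipient.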

The practice is particularly egregious where such content is shown to users whom the algorithms have identified or flagged as current or potential sympathizers of the violent content or hate speech being circulated. In a strikingly apt parallel, Justice Holmes articulated the reason for this in Frohwerk v. United States:

[I]t is impossible to say that it might not have been found that the circulation of the paper was in quarters where a little breath would be enough to kindle a flame and that the fact was known and relied upon by those who sent the paper out. Small compensation would not exonerate the defendant if it were found that he expected the result, even if pay were his chief desire.

249 U.S. 204, 209 (1919).

In light of the foregoing, social media networks’ algorithmic speech would likely be subject to rational basis review, which nearly any legislation would withstand given the significant public policy considerations at play, e.g., data privacy, consumer protection, and public safety. The specific language of such legislation would depend heavily on the nature of the algorithmic technologies being used in the industry. For example, legislators must take care to avoid drafting overly broad or all-encompassing language that would unwittingly restrict or prohibit the use of algorithms that do not tend to incite lawlessness. This goal may be more easily accomplished given recent technological developments, such as Facebook’s 2019 announcement that it removed more than seven million instances of hate speech because artificial intelligence, rather than human moderators, was now detecting hate speech on the platform. Billy Perrigo, “Facebook Says It’s Removing More Hate Speech Than Ever Before. But There’s a Catch,” Time, Nov. 27, 2019. Despite its drawbacks, this seemingly remedial measure is nevertheless a promising start that may help lawmakers better tailor legislative language to curb the violent real-world consequences brought about by social network algorithms.