February 08, 2021

First Amendment Considerations in the Federal Regulation of Social Media Networks’ Algorithmic Speech, Part II

Social media networks’ algorithmic speech would likely be subject to a rational basis review that nearly any legislation would withstand, given the significant public policy considerations at play.

By Veronika Balbuzanova

This is the second part of a two-part series. Read Part I.

Finding that a social network algorithm falls within the scope of the First Amendment’s Free Speech Clause is only the first part of the constitutional inquiry. A court ruling on the constitutionality of legislation regulating social network algorithms would then need to determine the type of protectable speech at issue and the corresponding level of scrutiny that applies. Because a social network algorithm of the sort at issue here “relates solely to the economic interests of the speaker and its audience,” it would likely be categorized as commercial speech subject to intermediate scrutiny. Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 561 (1980). Although the content circulated as a function of the algorithm may plausibly relate to matters of public concern, that content is not the speech being analyzed. See Zhang v. Baidu.com Inc., 10 F. Supp. 3d 433, 443 (S.D.N.Y. 2014) (explaining that speech with some commercial aspects is not “commercial speech” when the statements “relate to matters of public concern and do not themselves propose transactions”). In other words, a distinction must be drawn between the content comprising an algorithm, i.e., the code, and the content that is disseminated when that algorithm is run, e.g., advertisements, posts, and videos. The former is the actual commercial speech being analyzed, whereas the latter is simply an output or product of the former.
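To make that distinction concrete, the purely illustrative sketch below separates a hypothetical ranking function, the code a court would treat as the network’s own expression, from the ranked feed it emits, which is merely the content that code disseminates. The names and weights are assumptions for illustration only, not any network’s actual algorithm.

```python
# Purely illustrative sketch of a hypothetical engagement-based ranking
# algorithm; the names and weights are assumptions, not any network's real code.

from dataclasses import dataclass


@dataclass
class Post:
    text: str     # the disseminated content (a post, ad, or video caption)
    likes: int
    shares: int


def rank_feed(posts: list[Post]) -> list[Post]:
    """The 'algorithm': code embodying the developer's judgment that
    highly engaged-with posts should be shown first."""
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)


feed = rank_feed([
    Post("Local news update", likes=10, shares=1),
    Post("Inflammatory rumor", likes=500, shares=300),
])
# 'feed' is the algorithm's output, the circulated content; rank_feed itself
# is the code the article treats as the commercial speech under analysis.
```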

Under the intermediate level of scrutiny, federal regulation would be upheld if it serves a substantial government interest, directly advances that asserted interest, and is no more extensive than necessary to serve that interest. See Cent. Hudson, 447 U.S. at 566. Before a court undertakes this analysis, however, it will first have to determine that the speech is entitled to constitutional protection at all, i.e., that it concerns lawful activity and is not false or misleading. It is at this threshold that social network algorithms face the greatest risk of being denied First Amendment protection.

The issue of whether social network algorithms concern lawful activity is debatable. There is no question that writing code and developing algorithms are not legally prohibited activities. One is as free to develop an algorithm as one is to write a novel. However, as with any form of speech, there are constraints on this constitutional freedom that enjoin its exercise where such exercise is directed to inciting or producing imminent lawless action. See Brandenburg v. Ohio, 395 U.S. 444, 447 (1969). It is a question of proximity and degree depending, in large part, on the circumstances in which it is done. See Schenck v. United States, 249 U.S. 47, 52 (1919). Merely advocating or teaching lawless action is distinguished from “preparing a group for violent action and steeling it to such action,” with only the latter being subject to governmental regulation. Noto v. United States, 367 U.S. 290, 297–98 (1961); see also Brandenburg, 395 U.S. at 447. Given the extensive record of violence that social network algorithms have been documented to cause in countries and cultures worldwide, a court may plausibly find that the code comprising this technology, as currently written, does, indeed, prepare groups for lawless action and steel them to it. As discussed in the first part of this series, algorithms were responsible for the widespread circulation of hate speech that rallied others to commit mass violence, rape, and genocide against the Rohingya Muslims in Myanmar. Algorithms were responsible for disseminating hate speech and misinformation that directly resulted in anti-Muslim violence, mob attacks, civil unrest, and at least two deaths in Kandy, Sri Lanka. Algorithms were responsible for distributing anti-Muslim hate speech that induced communal riots in New Delhi, India.
