Facebook has been wildly successful in bringing people together. Groups, pages, profiles, and private chats have enabled users to connect with everything and everyone—from a favorite celebrity or preferred brand to a social cause or long-lost friend. However, this roaring success has also been at the root of the company's harshest criticism in its more than decade-long history. The proprietary algorithms at the heart of the social media conglomerate have come under uniquely intense scrutiny for their ongoing role in amplifying divisiveness and inciting offline violence across the globe. In much the same way that Facebook's algorithms have fueled historic social movements such as #MeToo and Black Lives Matter by connecting supporters of these causes, those same algorithms have enabled extremist views advocating violence, racism, terrorism, and even genocide to spread like wildfire.
These issues have translated into increasingly pronounced calls for change in the industry. On a macro level, reforms are demanded both internally—whereby social media companies are pressured to strengthen their internal governance and user policies (and strictly enforce them)—and externally—whereby policymakers are expected to propose and pass meaningful legislation that adequately addresses and prevents these real-world consequences. Few articles, however, address the legal parameters of the latter option.
This two-part series does not purport to set out what the content of such proposed legislation should be, but rather what First Amendment free speech considerations must be taken into account when drafting it. If federal rules and regulations are to be imposed on the nature and type of algorithms that social media companies may use, there is real value in proactively analyzing the probability that such legislation would withstand a legal and, more specifically, a constitutional challenge. Part I of this series analyzes whether social network algorithms constitute protectable free speech. Part II analyzes what level of constitutional scrutiny laws regulating such algorithms would likely face (and whether they would pass constitutional muster).