As lawyers, we are the menders reinforcing and fixing the fabric of community in this country. We exist because, while humans are social animals, living in a social community is a complex endeavor. So, what happens when you try to create a new community? When you start a new country, you (hopefully) gather the lawyers and write a constitution. When you form a new county or a municipality, you set up a court system to manage disputes in your new polity. But what happens when the new community is created not to offer services to its members but to profit off them? What happens when managing disputes hurts the bottom line, and unmitigated, unhinged divisiveness makes profits soar? The answer is playing out before us on Facebook, X (formerly Twitter), Instagram, TikTok, and all the other large social media platforms.
The Law Is Simple
The First Amendment prohibits government encroachment on the rights of citizens to free speech. It has essentially no bearing on private companies such as Meta (parent company of Facebook and Instagram) or X Corp. Those companies can delete whatever posts they want and ban whatever users they want. The main limitation on these organizations is that they are sometimes required to remove content, primarily when the property rights of others are involved. As the Internet developed the concept of platforms (think YouTube, where the company doesn’t actually create, upload, or control any of the content it offers), it created an enormous opportunity for the theft and distribution of copyrighted material. Enter the Digital Millennium Copyright Act of 1998 (DMCA). The DMCA, and specifically 17 U.S.C. § 512, established a framework in which platforms were not required to police all potential content on their sites so long as they took appropriate action (including removing content and banning users) when copyright holders informed them of infringing material.
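The statutory logic is roughly this: a platform keeps its safe harbor if it removes identified content expeditiously upon a valid notice and terminates repeat infringers. The sketch below is one illustrative reading of that flow, not the statute itself; every class and function name is hypothetical, and the three-strike threshold is purely illustrative.

```python
# Illustrative sketch of the DMCA section 512 notice-and-takedown flow.
# All names (TakedownNotice, Platform, handle_notice) are hypothetical;
# the statute defines these duties in prose, not code.

from dataclasses import dataclass, field


@dataclass
class TakedownNotice:
    claimant: str               # copyright holder sending the notice
    content_id: str             # the allegedly infringing post or upload
    work_identified: bool       # notice identifies the copyrighted work
    good_faith_statement: bool  # required sworn statements included


@dataclass
class Platform:
    content: dict = field(default_factory=dict)   # content_id -> uploader
    strikes: dict = field(default_factory=dict)   # uploader -> strike count
    banned_users: set = field(default_factory=set)

    def handle_notice(self, notice: TakedownNotice) -> str:
        # The safe harbor does not require the platform to police all
        # content, only to act "expeditiously" on a valid notice.
        if not (notice.work_identified and notice.good_faith_statement):
            return "notice deficient; no action required"
        uploader = self.content.pop(notice.content_id, None)
        if uploader is None:
            return "content not found"
        # Repeat infringers must be terminated to keep the safe harbor;
        # the threshold of three is illustrative only.
        self.strikes[uploader] = self.strikes.get(uploader, 0) + 1
        if self.strikes[uploader] >= 3:
            self.banned_users.add(uploader)
        return "content removed"
```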
Without the DMCA, social media would never have existed. The business model depends on giving users privacy in their posts and on vast numbers of people sharing content in volumes that could never be monitored by the lean companies that created these platforms. And so, new communities were born in ethereal space where anyone could talk to anyone, and no one was watching. The problems were obvious, and they came immediately.
Bigotry, Sex, and Lies
Social media has done a lot of good, but from the start, it has also been a hostile place for women and people of color, who face a magnified, community-empowered, and anonymized version of the hatred and violence they experience in their daily lives. In addition, misinformation spreads like wildfire everywhere you look, and the platforms that have tried to ban or limit sexual content find themselves in a never-ending battle to maintain that policy. It’s what the Internet has always been, but now with more people and easier access.
Social media companies have, from the start, been fighting a battle over content moderation on two fronts: the technical and the social. On the technical front, moderating content without large amounts of human labor is difficult. Artificial intelligence, even in its current form, struggles to understand context and subtlety. What is a threat, and what is a joke between friends? What is art, and what is pornography? While the technology is getting better, it is not yet capable of true content moderation and won’t be for quite some time. Furthermore, when disagreements arise, resolving disputes requires reviewing new data and analyzing the original post in light of that new data, a task technology cannot yet perform.
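To see why context defeats simple automation, consider a deliberately naive keyword filter. This is a purely hypothetical sketch, not any platform’s actual system: it flags a genuine threat and an affectionate joke identically, because it matches words rather than intent.

```python
# A deliberately naive keyword-based moderation filter, shown only to
# illustrate why context-blind automation fails. This is not any
# platform's real system.

THREAT_TERMS = {"kill", "destroy"}


def flag_post(text: str) -> bool:
    # Strip basic punctuation, lowercase, and check for flagged words.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & THREAT_TERMS)


# Both posts trip the filter, yet only one is plausibly a threat.
print(flag_post("I will kill you if you touch my things again"))  # True
print(flag_post("Haha, you kill me, that joke was perfect"))      # True
```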
On the social front, building these systems while actively using them has led (and continues to lead) to inconsistent reporting, enforcement, and review of harmful posts. Because technology cannot independently moderate content, the system these platforms set up (modeled after the DMCA) largely depends on reporting by users. Unsurprisingly, users often disagree about what is problematic, harassing, or factual. To moderate such disputes, social media platforms must regularly take stances on controversial issues; even in the simplest case, they must side with one of two active, upset users. For many years, these moderation problems existed but were, for most people, a marginal issue. And then, the election came.