
GPSolo Magazine

GPSolo September/October 2023: Protest or Riot: An Overview of Accountability

Who Watches the Watchmen? Content Moderation in Social Media

Jordan Lee Couch

Summary

  • Social media companies have faced tremendous turmoil in recent years regarding who is allowed to post what and where.
  • The First Amendment has essentially no bearing on private companies such as Meta (parent company of Facebook and Instagram) or X Corp.
  • While technology is improving, it is not yet capable of true content moderation and won’t be for quite some time.
  • Platforms have set themselves up in a way guaranteeing that they will never meaningfully moderate content. People who should be banned won’t be, and people who have done nothing wrong will be banned.

As lawyers, we are the menders reinforcing and fixing the fabric of community in this country. We exist because, while humans are social animals, living in a social community is a complex endeavor. So, what happens when you try to create a new community? When you start a new country, you (hopefully) gather the lawyers and write a constitution. When you form a new county or a municipality, you set up a court system to manage disputes in your new polity. But what happens when the new community is created not to offer services to its members but to profit off them? What happens when managing disputes hurts the bottom line, and unmitigated, unhinged divisiveness makes profits soar? The answer is playing out before us on Facebook, X (formerly Twitter), Instagram, TikTok, and all the other large social media platforms.

The Law Is Simple

The First Amendment prohibits government encroachment on the rights of citizens to free speech. It has essentially no bearing on private companies such as Meta (parent company of Facebook and Instagram) or X Corp. Those companies can delete whatever posts they want and ban whatever users they want. The main limitation placed on these organizations is that there are times when they are required to remove content, primarily when the property rights of others are involved. As the Internet was developing the concept of platforms (think YouTube, where the company doesn’t actually create or even upload and control any of the content it offers), an enormous opportunity was created for the theft and distribution of copyrighted materials. Enter the Digital Millennium Copyright Act of 1998 (DMCA). The DMCA, and specifically 17 U.S.C. § 512, established a framework in which platforms were not required to police all potential content on their sites so long as they took appropriate action (including removing content and banning users) when copyright holders informed them of infringing material.

Without the DMCA, social media would never have existed. The business model depends on giving users privacy in their posts and having vast numbers of people sharing content in enormous volumes that could not be monitored by the lean companies that created these platforms. And so, new communities were born in ethereal space where anyone could talk to anyone, and no one was watching. The problems were obvious, and they came immediately.

Bigotry, Sex, and Lies

Social media has done a lot of good, but from the start, it has also been a hostile place for women and people of color, who face a magnified, community-empowered, and anonymized version of the hatred and violence they experience in their daily lives. In addition, misinformation spreads like wildfire everywhere you look, and the platforms that have tried to ban or limit sexual content find themselves in a never-ending battle to maintain that policy. It’s what the Internet has always been, but now with more people and easier access.

Social media companies have, from the start, been fighting a battle over content moderation on two fronts: the technical and the social. On the technical front, moderating content without large amounts of human labor is difficult. Artificial intelligence, even in its current form, struggles to understand context and subtlety. What is a threat, and what is a joke between friends? What is art, and what is pornography? While technology is getting better, it is simply not capable of true content moderation and won’t be for quite some time. Furthermore, when disagreements arise, resolving disputes requires reviewing new information and analyzing the original post in light of it, which is not a task technology can handle at this time.

On the social front, building the systems while actively trying to use them has led (and continues to lead) to inconsistent reporting, enforcement, and review of harmful posts. Because technology is not capable of independently moderating content, the system these platforms set up (modeled after the DMCA) largely depends on reporting by users. Unsurprisingly, users often disagree on what is problematic, harassing, or factual. To moderate such disputes, social media platforms must regularly take stances on controversial issues. Even in the simplest cases, they have to decide between two active users who are upset. For many years, these moderation problems existed but were, for most people, a marginal issue. And then the 2016 election came.

Banning Goes Mainstream

What do you do when a foreign country uses your platform to influence a U.S. election? How about when white supremacists, emboldened by the political environment, go from the fringes to the mainstream? And what option do you have when the president of the United States blatantly violates the anti-harassment rules of your platform? Starting in 2015, social media faced a whole new landscape in the world of content moderation. Russia was exploiting the lack of moderation to spread misinformation and sow dissent among U.S. citizens, and the platforms discovered that they were wildly unprepared to respond. As white supremacists grew emboldened on the platforms, deleting posts and banning users became common responses. To address misinformation, platforms started removing posts or adding notes informing users that a post was inaccurate and linking to fact-checking websites. Each of these moderation tactics quickly resulted in angry protests by users.

What made it even more complex was that Donald Trump, who had been incredibly effective at using social media in his campaign for president, was one of the greatest violators of the platform rules, repeatedly spreading misinformation and using harassing and offensive language. Twitter and Facebook had to decide what to do when a user who, under any other circumstances, would be banned from the platforms was using the official account of the president of the United States. For a long time, they did nothing. Then January 6 came. Domestic terrorists stormed the U.S. Capitol in an attempt to overturn the presidential election, guided and encouraged by Donald Trump. In the aftermath, Facebook temporarily banned Donald Trump from its platform, and Twitter “permanently” banned him from its platform. This was a historic move, but even so, many felt it was too little, too late.

The Dust Settles on a Different World

Social media has faced tremendous turmoil in recent years over problems with privacy, election interference, national security, and harm to users (especially children). At the heart of much of that turmoil is content moderation—who is allowed to post what and where. As a result, the landscape has changed. Facebook created a new, semi-independent Oversight Board to review appeals of content moderation decisions in a system modeled after courts. While the change has garnered a lot of media attention, it is so far unclear what impact it has had. But perhaps the most substantial change to the social media landscape was Elon Musk’s clumsy takeover of Twitter. Having initially campaigned on the idea of removing bots and improving content moderation, Musk has instead focused on removing bans on white supremacists and other right-wing voices and has reinstated Donald Trump’s account.

All this turmoil has led to an explosion of new platforms. Truth Social took on the mantle of the leading right-wing platform. Mastodon, Bluesky, and countless others tried to move into Twitter’s space. Meta’s Threads had the biggest app launch ever but faded from the conversation almost as quickly.

So, has anything really changed in the world of content moderation? Honestly, not really. The business model of these companies and the high-stakes gambling of venture capitalists that boost them make it impossible for these companies to meaningfully moderate content. These companies were set up to have exorbitant growth in profits by keeping staff costs as low as possible, thereby achieving incredible economies of scale. The problem with this is that moderation of disputes requires people, and people are expensive. For example, the United States has about 1.3 million lawyers like us whose job is to help in the moderation of disputes. Facebook has more than 2 billion daily active users, a larger population than India and the United States combined. If Facebook employed the same ratio of content moderators that the United States has lawyers, Facebook would need 7.7 million content moderators. Facebook has a total of fewer than 90,000 employees.
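(Assuming a U.S. population of about 335 million, the back-of-the-envelope math runs roughly as follows: 1.3 million lawyers for 335 million people is about one lawyer for every 260 residents, and 2 billion users divided by 260 is roughly 7.7 million moderators.)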

Platforms have set themselves up in a way guaranteeing that they cannot and will not ever meaningfully moderate content. People who should be banned won’t be, and people who have done nothing wrong will be banned. As with all problems of society, those who will suffer most will be those who are already socioeconomically disadvantaged. If we want better, we must hold accountable not just the companies but also the financial systems that hang them with a golden noose and the political leaders who refuse to educate themselves sufficiently to regulate either. We can, and I believe we will, do better. There’s a story I believe comes from an old Persian fable, though I must confess my familiarity with it comes primarily from the author Kurt Vonnegut Jr. A king asks his wise advisor for something he can say to his people no matter what, a statement that will offer guidance in disaster and prosperity. The advisor replies, “And this too shall pass.”
