

GPSolo September/October 2024: Election Law

Free Speech and Free Press in the Age of Disinformation

Robert Joseph Desmond

Summary

  • While we celebrate the First Amendment’s protection of freedom of thought, expression, association, and the press, we continue to elect legislatures that pass laws that drastically and unconstitutionally limit these rights.
  • It is no secret that false information is often included in print, online, broadcast, and cable news, but this does not justify governmental restrictions on a free press.
  • Social media platforms such as Facebook and Reddit should strive to employ only the least restrictive means of regulating protected speech.


The First Amendment’s protections of religion, speech, the press, assembly, and petition are not only essential to the pursuit of happiness by each person, individually, but also to the proper functioning of a republic governed by the people as a whole.

Admittedly, speech can have negative consequences. The following is a non-exhaustive list of concerns commonly raised by lobbyists and legislatures when attempting to restrict speech:

  • The unintentional spread of false information, also known as “misinformation.”
  • The intentional use of false information to mislead, also known as “disinformation.”
  • The intentional weaponization of false information to cause harm, known as “defamation.”
  • The unintentional disclosure of private information, such as data leaks.
  • The intentional and unauthorized access of private information, such as data breaches and intrusion upon seclusion.
  • The intentional disclosure of private information to cause harm, such as nonconsensual dissemination of intimate images and public disclosure of private facts.
  • The intentional and unauthorized use of name, image, and likeness rights and intellectual property.
  • The use of speech that is integral to illegal activities such as extortion, conspiracy, and solicitation.
  • The recordation of illegal activities such as child sexual abuse materials.
  • The disclosure of government secrets.
  • The incitement of or threatening to engage in imminent violence, such as urging a mob to attack a nearby building or threatening the life of an elected official.
  • The intentional obstruction of government actions such as filing false elector certifications.

Undoubtedly, in some instances, the government may have a compelling interest to use the least restrictive means necessary to prevent the harms of certain historically unprotected areas of speech. In other instances, the government should more finely distinguish between thought, expression, and action to carve out suitable space where only specific criminal activities are prosecuted, while the mere exercise of religion and speech rights is left uninhibited. It is the government’s constitutional obligation to ensure that its means are sufficiently narrowed, its interests sufficiently compelling, and its restraints sufficiently distinguishable and clear.

As citizens of this great nation, it is our civic duty to take whatever additional measures may be necessary to further reduce the adverse influences of false information and other forms of harmful speech on our own lives rather than relying on the government to constrain protected speech that we disagree with or otherwise disfavor. It is high time that we return to the principled doctrine of counter-speech and resume curtailing misinformation, disinformation, and other harmful speech by speaking our piece rather than by passing laws that require others to hold their peace. Speech should never be compelled or confined if further discourse would expose falsehoods or otherwise remedy the harm caused by that speech.

The Spread of False Information by Traditional Media

It is no secret that false information is often included in print, online, broadcast, and cable news—whether by accident, mistake, misunderstanding, negligence, recklessness, or intention.

In an ideal world, the news would be completely objective, and there would be a clear delineation between reporters, commentators, and entertainers. However, people are deeply flawed and inherently imperfect. Even the most skilled reporters inadvertently omit crucial details that provide necessary context or color their stories with a shade of subjectivity that leads to false or misleading representations. Further, both capitalism and politicking incentivize the blurring of news, opinion, and entertainment and cause consumers to misinterpret the speaker’s objectives. Nevertheless, these justifications are insufficient to support governmental restrictions on a free press.

Proponents of statutory limitations on the press believe the general public often lacks the knowledge, skill, and resources necessary to effectively sift through the noise and confirm the veracity of reporting. Opponents of such limitations, on the other hand, appropriately acknowledge that transparent public discourse in a free marketplace of ideas is the best-known method of uncovering the truth. While a better method may exist, it has not yet been discovered. Of course, fairness is not innate in the free marketplace of ideas, and the government has a compelling interest in discouraging the injustices caused by defamation.

Defamation law currently provides a reliable (though not always satisfactory) check when reporting includes misinformation or disinformation. For example, defamation claims have been successfully brought in recent years against (1) InfoWars host Alex Jones for his lies about the Sandy Hook shootings; (2) Fox News for the lies of its anchors, reporters, and pundits about the reliability of certain voting machines in the 2020 election; and (3) CNN for its portrayal of high school student Nicholas Sandmann, who was attending a March for Life rally, as a racist.

Proving the elements of a defamation claim for disinformation is fairly straightforward because the false information is published with the intention to mislead. A defamation claim for misinformation is somewhat harder to prove because the false information is spread unintentionally, but the elements may still be proven by showing a lesser mens rea. Most plaintiffs must show that the speaker was at least negligent as to the falsity of the statement, but public figures must show that the speaker had actual malice—that is, knowledge of or reckless disregard for the falsity of the statement.

Indeed, defamation law as a deterrent and enforcement mechanism against misinformation and disinformation has its flaws. For example, both InfoWars and Fox News were able to repeat their lies for years, and Alex Jones has gone to significant lengths to evade paying damages to his victims. CNN settled the Sandmann defamation case one year after it was filed and did not engage in repeated publication of the allegedly defamatory statements. Even after hefty judgments against or settlements by these defendants, millions of Americans still fundamentally believe the false information at the core of those cases. Plus, defamation law cannot be used to prevent the publication of defamatory materials, as such an order would be an unconstitutional prior restraint. Thus, it is the responsibility of other members of the press to correct the record, and the responsibility of consumers to consult multiple sources.

Exercising Free Speech Rights on Social Media

Social media is not a haven for free speech. While the First Amendment prohibits the government from restricting speech, social media platforms can freely choose (1) what speech to solicit or ban, (2) what posts to promote or demote, and (3) whether to allow or deactivate comments on, monetization of, and other engagements with those posts. More broadly, platforms are given free rein to determine whether certain users should be (1) included in lists of suggested accounts, (2) given access to the platform without special promotion, or (3) deplatformed entirely.

Many prominent influencers have criticized various platforms for making purportedly discriminatory moderation decisions based on political viewpoint and for being notoriously opaque in their decision-making and appeals processes. Advocates of government regulations for social media platforms often support plans that require (1) political neutrality by moderators, (2) conspicuous disclosure of the platform’s content policies, and (3) a fair and transparent appeals process. For example, the Supreme Court recently remanded Moody v. NetChoice and NetChoice v. Paxton, 603 U.S. ___ (2024), because the lower courts failed to properly analyze the First Amendment concerns with similar laws in Florida and Texas.

Some skeptics of such regulations argue that platforms should be free to cater to certain political ideologies, as any such regulation would amount to unconstitutionally compelled speech. These skeptics often proclaim that the Internet bubbles that develop on these platforms are safe spaces in which reasonable people can freely associate to debate their beliefs in good faith while excluding those who vehemently disagree, resort to name-calling and bullying, and refuse to articulate an argument on the merits. Other skeptics propose that increased transparency in the enforcement of content policies provides bad faith actors with the tools necessary to violate the spirit but not the letter of the platform’s rules.

The mere fact that the censor is a corporate entity rather than a governmental body does not eliminate the deleterious effects of the censorship on the marketplace of ideas, but private censorship is less impactful than government censorship because it narrowly applies to a specific platform rather than to the overall marketplace. Even if censorship is widespread across many platforms, speakers can easily make their speech heard on less restrictive platforms or, if they have the resources, start their own platforms, as Donald Trump did. Unlike the finite public airwaves, where the lucky few holding radio or television licenses operate under an equal-time rule for political advertising, the Internet is not so limited a resource that an equal access rule must be imposed on social media. Less restrictive means are readily available to further the government’s interests in this setting.

While not constitutionally required to do so, general-purpose platforms such as Facebook and Reddit should strive to employ only the least restrictive means of regulating protected speech because the marketplace of ideas is greatly hindered when such platforms broadly restrict large categories of protected speech, such as hate speech or sexually oriented speech. For example, many general-purpose platforms allow users to create their own forums, such as Groups and Subreddits, and to freely control the admission of members and the rules of posting to that specific forum. User-moderated forums allow users to have valuable conversations about tough topics such as race, gender, and religion, to express taboo thoughts, and to explore their sexual desires and sexuality without pushing that speech into the feeds of users who may be harmed by or are otherwise uninterested in that speech. However, frequenting such forums may drive the user into an information silo that leaves a false impression that the range of opinions held on any given topic is limited to those expressed by the members of the forum. If left unchecked, these forums may also morph from a place for discussing radical ideas into a place for planning illegal activities. Conversely, defenders argue that these forums act as safety valves that permit users to engage in hate speech so that they do not act out violently. To safeguard against these risks, platforms must take an active role in enforcing their broader content policies in these forums while leaving the enforcement of forum-specific rules to the users who manage those forums.

Even as many platforms aggressively moderate user-generated content to limit the undesirable impacts of free speech, many naysayers assert that current levels and methods of moderation are insufficient because the platforms struggle to prevent users from sharing pirated content, nonconsensual intimate images, child sexual abuse materials, government secrets, leaked and hacked private information, and other harmful speech. Recent advances in artificial intelligence are preventing the publication of some types of harmful speech, but this software often makes mistakes and over-moderates protected speech. Many platforms are also experimenting with new means of diminishing the destruction caused by harmful speech. For example, the Community Notes feature on X allows users to fact-check or otherwise add context to another user’s tweet.

In other instances, the blame falls squarely on the speech or activities of the platform itself. For example, recent lawsuits allege that algorithms used by Meta, Google, and TikTok radicalize users and influence them to join terrorist causes or develop eating disorders. Restrictions on platform speech run many of the same risks previously discussed in this article.

The Special Case of Sexually Oriented Speech

In recent years, there has been a resurgence of legislative constraints and coordinated political attacks on websites that allow users to post sexually oriented speech. Often, these laws and movements are led by conservative Christian groups and right-leaning members of the traditional media who frame their motive as a desire to end human trafficking. While their stated ends are undoubtedly noble, their means typically result in overbroad and vague laws and corporate policies that are hard to enforce, are ineffective at achieving their goals, and violate free speech ideals.

Most notably, the passage of FOSTA (Allow States and Victims to Fight Online Sex Trafficking Act) and SESTA (Stop Enabling Sex Traffickers Act) in 2018 was intended as a path to prosecuting operators of websites where sex trafficking occurs. The mere existence of these laws has had a massive chilling effect on protected speech. Many of the world’s most popular sites, such as Tumblr and Craigslist, chose to preemptively ban sexually oriented speech rather than risk criminal prosecution and civil liability, and numerous forums dedicated to discussing sexuality and gender issues shuttered.

In 2021, payment processors began requiring website operators to execute strict age, identity, and consent verification before allowing a user to post sexually oriented speech. This form of private censorship is more troublesome than platform-specific censorship, as it broadly applies to all platforms that accept credit card payments. Rather than submit to these new policies, a few adult platforms have chosen to transact solely in cryptocurrency.

More recently, nearly half of the states across America have passed laws that require website operators to verify the age of all users before granting access to sexually oriented speech. The Supreme Court is expected to review the constitutionality of Texas’s age verification law in fall 2024.

While these laws and policies may seem commonsense on the surface, civil rights organizations have been quick to point out that these rules greatly confine anonymous speech, subject users to having their personal information (including intimate details about their gender identity, sexual orientation, and romantic history) hacked, and potentially give the government and other nefarious actors a way to track the public’s sexual activities.

Conclusion

Politics are inarguably creating a schism across the country, and our ideological differences are undisputedly magnified by the traditional media we consume and the social media platforms we visit. Nonetheless, Americans soundly agree that a thriving democracy requires freedom of thought, freedom of expression, freedom of association, and freedom of the press. While we celebrate our founding fathers for enshrining protections of these rights in the First Amendment, we also continue to elect legislatures that pass laws that drastically and unconstitutionally limit these rights in a variety of ways and for a plethora of reasons. Instead, we must challenge such laws, encourage a free and fair marketplace of ideas, and make adjustments in our personal lives to account for changes in media and technology.
