
Section 230: Twenty-Six Words that Created Controversy

Joseph E Cosgrove Jr

Summary

  • Courts have reached differing opinions on, and interpretations of, section 230.
  • It’s time to review section 230 given the amount of discussion in the media, in Washington, D.C., and in state capitols.
  • The Congressional Research Service issued the report Social Media: Misinformation and Content Moderation Issues for Congress, concluding that if Congress decides to address the issue of misinformation or moderation, it might consider factors such as the scope of proposed actions.
  • Section 230 controversies will continue to thrive at the federal and state levels and in legislative, judicial, and political forums.

We’re on the cusp of something exhilarating and terrifying. . . . What the internet is going to do to society, both good and bad, is unimaginable.

—David Bowie (1999)

One of the hottest topics in the internet and telecommunication regulation space is an originally rather obscure provision of the Telecommunications Act of 1996: 47 U.S.C. § 230. The provision, commonly referred to today as section 230, did not attract much attention in 1996. But this attention deficit changed dramatically by 2021. This regulatory statute is the topic of frequent, intense debate and discussion in various forums and by numerous politicians and pundits. Some argue that this provision is responsible for propelling the development of the internet. Others argue that section 230 has served to nurture the growth of online superpowers that control the flow of speech in the public square. The spillover effect of this controversy has intensified the scrutiny placed on “Big Tech” and led to numerous trips to Capitol Hill for Big Tech’s CEOs.

So, what is section 230? What is the issue with section 230 a quarter of a century after its enactment? What is the status of section 230? Where might this controversy end up? Let’s dive in.

Setting the Stage with the Telephone Platform

Shortly after Alexander Graham Bell’s first call to Watson in 1876, subscribers likely began using their new telephones to harass others and commit crimes and torts. No specific evidence as to when such nefarious practices started is offered here. But there is a presumption based upon our flawed human nature that callers quickly tumbled to the idea of using the telephone as a means of threat, extortion, theft, gossip, and harassment.

In any event, the early telephone companies were treated as “common carriers.” Thus, as Professors Stuart Minor Benjamin and James B. Speta explain, telephone companies have been exempt from liability (for, e.g., defamation) for their customers’ miscreant deeds. The basic idea was that the telephone companies did not control or monitor the customers’ content. This fact has traditionally distinguished telephone companies from newspapers or television broadcasters, which have been treated as “speakers or publishers” due to editorial control over what appears in their type of media/platform. As Tarleton Gillespie observed, telephone companies traditionally have been “trusted interpersonal information conduits,” as the service is the commodity, not the information it conveys. This contrasts with media content producers such as television and newspapers, where the entertainment is the commodity and we expect some content moderation.

Social media platforms, the focus of this article, are perhaps a new category, “a hybrid between mere information conduits and media content providers.” Some argue that these social media platforms (a product of technological convergence) are “enjoying the privileges of common carriers without the responsibilities” such as the obligation to serve all users in a nondiscriminatory manner.

A tour of some of the more interesting section 230 cases may help flesh out this topic.

The “Wolf of Wall Street” Gives Birth to Section 230

Fast-forward a century or so. In 1996 the internet platform was beginning to take shape, and its growth coincided with the first major rewrite of telecommunications law since 1934. The federal Telecommunications Act of 1996 (FTA 96), described as “revolutionary legislation” by President Bill Clinton, was primarily focused on three big themes: facilitating local exchange competition, increasing competition in the long-distance telephony market, and reforming the century-old policy of universal service. But as Professor Jeff Kosseff explained in his must-read “biographical” book on section 230, this under-the-radar provision worked its way into the FTA 96. Section 230 flew under the banner of the Communications Decency Act, which was added to Title V of the FTA 96. Today, “Section 230” has its own Wikipedia page!

As is often the case, legislation is a by-product of catching up to prior, real-life events. In this case, the firm of Stratton Oakmont (yes, that Stratton Oakmont) had a legal battle with Prodigy (now, like Blockbuster and Radio Shack, a memory). The firm sued Prodigy over content that it deemed defamatory on the latter’s online “bulletin boards.” The posts of one Prodigy user described the head of Stratton as a “criminal” and the company as a “fraud,” among other such invectives. The court in Stratton Oakmont, Inc. v. Prodigy Services Co. eventually held Prodigy to the “strict liability” standard of a publisher of defamatory statements because it had actively advertised its practice of controlling content and screening/editing messages posted on its bulletin boards.

Congress (at least those members who were aware of the implications of section 230) swooped in within a year of the decision and decided to provide “interactive computer services” with millions of users statutory “immunity” (albeit this term is not used in section 230) from tort-based lawsuits. Such burdensome litigation posed an imminent and substantial threat to the relatively new internet platform and its providers. It must be noted that this law was passed before there was a Facebook or Twitter or most of the social media platforms that currently occupy large parts of our daily lives. The related policy position was to encourage such providers to self-regulate the dissemination of offensive material on the internet and not be subject to liability as a “publisher” in exercising these “editorial” functions. In short, as Milton Mueller posited, section 230 was intended both to immunize providers that did nothing to restrict users’ communications and to immunize providers that took efforts to discourage or restrict undesirable content.

So, What Are the 26 Words?

Section 230 has far more than 26 words, but this article focuses on the 26 words that constitute the key “publisher or speaker” provisions in 47 U.S.C. § 230(c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The companion provision is 47 U.S.C. § 230(c)(2), which provides:

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Taken together, these two subsections are known as the “Good Samaritan” section relating to the blocking and screening of offensive material. Digesting its intended effect: the provider is not liable for bad information posted by another, and the provider is also not liable whether it moderates or edits content or decides not to do so (it is not required to do so). The underlying ideas were to encourage the development of the internet and limit government interference in this new platform/medium to a minimum. While wanting to protect children from indecent material, Congress also envisioned encouraging the exchange of “intellectual activity” and promoting commerce via the internet.

As with other best of intentions at the time, we’ll turn to how these objectives turned out in the past quarter century and how courts applied or misapplied (depending upon your viewpoint) this “simple” little addition to the FTA 96. We’ll then highlight the current debate over section 230 and discuss what, if anything, may be done about it in Washington.

Just How Far Does Section 230 Go?

With section 230 now “on the books” (at least in the United States), the next phase was the interpretation and application of the law by various courts. There have been numerous decisions with a variety of opinions.

Zeran v. AOL

One such early case was Zeran v. AOL, which had an odd connection to the tragic Oklahoma City bombing of 1995. An unidentified person posted advertisements on AOL’s (yes, it is still around) bulletin board offering offensive T-shirts related to the bombing for purchase by calling the home telephone number of Kenneth Zeran. Zeran was immediately deluged with calls and death threats. Zeran called AOL for help, which said that the posting would be removed from the bulletin board but that AOL would not print a retraction. Nonetheless, additional postings on AOL continued for several days. Zeran repeatedly called AOL for assistance and was told that the offending account would be closed. But by this time, a local radio station had relayed the first posting on air and attributed it to “Ken” at Zeran’s number. The harassing and threatening calls increased.

Zeran sued AOL, arguing that AOL should be liable for defamatory speech initiated by a third party. AOL pleaded section 230 as an affirmative defense, and the U.S. Court of Appeals for the Fourth Circuit upheld the district court’s finding of immunity. The court specifically rejected Zeran’s argument that section 230 eliminates only “publisher” liability and not “distributor” liability. The court found that distributor liability is merely a “subset” of publisher liability. The court also rebuffed the argument that liability should be imposed on service providers that have actual knowledge of defamatory content (due to notice), finding that such liability would be impractical to administer and would defeat the fundamental purposes of section 230.

Zeran illustrates the difficult balance the section 230 framework strikes between harm to an individual and society’s benefit from an online platform.

Fair Housing Council of San Fernando Valley v. Roommates.com

Fair Housing Council of San Fernando Valley v. Roommates.com involved another unusual set of facts unlikely contemplated by the authors of section 230. Roommate.com (Roommate) operated a website to match people who had rooms to rent with would-be renters, and the website obtained its revenues from advertisers and subscribers. Roommate created a series of profile questions, including questions about sexual orientation, and encouraged users to provide “additional comments.” The underlying litigation was a complaint by the Fair Housing Council that Roommate’s business violated the Fair Housing Act (FHA). Roommate won dismissal at the district court level, relying on section 230 immunity.

The U.S. Court of Appeals for the Ninth Circuit explained that a website operator can be both a “service” provider, i.e., one who “passively” displays content of third parties, and a “content” provider who creates content. Thus, the operator may be liable for some content and have immunity for other content. The court thought that section 230 was meant to immunize the removal of content, not the creation of content. Here, Roommate was found to have “created the questions and choice of answers, and designed [the] website registration process around them” and thus was an “information content provider” as to that material, in the court’s opinion. As to the open-ended “additional comments” section, however, Roommate “passively” displayed the content provided by the subscriber, per the majority, and the Ninth Circuit thus affirmed Roommate’s “immunity” (again, not a term in section 230) under section 230 for that portion of the website.

The dissent argued that the majority opinion expanded liability for internet service providers and would “chill the robust development of the Internet” and “chill speech on the Internet.” The dissent further argued that the users were providers of the content and that the majority had blurred the definition of “development.” More than a decade later, arguments about the chilling of speech on the internet abound—but, as we will see, from a different viewpoint.

Section 230 and “Hard” Cases

Is there a limit to section 230’s immunity force field of protection for online platforms?

Doe v. Backpage.com, LLC

In Doe v. Backpage.com, LLC, Judge Selya wrote that “[t]his is a hard case . . . in the sense that the law requires that we . . . deny relief to plaintiffs whose circumstances evoke outrage.” Backpage.com provided an online classified advertising service that included the categories of “Adult Entertainment” and “Escorts.” Three young women who had been minors during the relevant time period brought suit against Backpage.com for facilitating sex trafficking. The suit claimed that the website’s rules and processes helped encourage this despicable practice (by, for example, failing to require phone or email verification). The question presented was whether section 230 shielded Backpage from liability. The district court found that section 230 shielded conduct if the defendant “is a ‘provider or user of an interactive computer service’; . . . the claim is based on ‘information provided by another information content provider’; and . . . the claim would treat [the defendant] ‘as the publisher or speaker’ of the information.”

The First Circuit found that the essential claim that the website facilitated the illegal conduct necessarily treated the website as a publisher or speaker of the content, and thus, Backpage was entitled to section 230(c)(1) protection! The court was not amenable to any argument that Backpage had gone beyond the behavior of Prodigy and AOL in the cases discussed above. The court pointed the appellants toward Congress to seek legislation, and remedial legislation was subsequently passed. But this case serves as a possible precursor to other hard cases that have arisen or will arise. What, then, will be made of such outrageous exceptions?

Batzel v. Smith

Another odd set of facts became the subject of section 230 litigation. In Batzel v. Smith, a handyman apparently had some issue with his customer, a lawyer named Batzel. The handyman overheard an (alleged) conversation in which Batzel said that she had had connections with Hitler’s staff and that she possessed a significant amount of old art that she said she had inherited. The handyman-turned-war-crime-solver crafted an email outlining his concerns and sent it to a stolen art investigation–related website. The website, operated out of the Netherlands by Ton Cremers and used by art theft investigators, published the email. The handyman later said that he would not have sent the email to Cremers’s website if he had known that it would be blasted around the internet. Batzel sued all parties involved, including some advertisers on the website.

Cremers raised section 230 in his defense, arguing that the handyman’s email was “information provided by another information content provider.” Hence, Cremers claimed that he could not be sued for “publishing” it on the internet under section 230. Judge Berzon and the majority agreed, explaining that Cremers did no more than select and make insignificant changes to the email in question. Simply, the majority read the “26 words” literally.

The dissent argued that the majority went far beyond what Congress intended and that people will now be able to “spread vicious falsehoods” on the internet with impunity. Judge Gould in his dissent explained that

Congress understood that entities that facilitate communication on the Internet—particularly entities that operate e-mail networks, “chat rooms,” “bulletin boards,” and “listservs”—have special needs. The amount of information communicated through such services is staggering. Millions of communications are sent daily. It would be impossible to screen all such communications for libelous or offensive content.

Judge Gould would implement section 230 under the following test:

Similarly, the owner, operator, organizer, or moderator of an Internet bulletin board, chat room, or listserv would be immune from libel suits arising out of messages distributed using that technology, provided that the person does not actively select particular messages for publication.

On the other hand, a person who receives a libelous communication and makes the decision to disseminate that message to others—whether via e-mail, a bulletin board, a chat room, or a listserv—would not be immune.

As the majority noted at the outset of the opinion, Congress has chosen to treat liability for defamation and obscenity differently in “cyberspace” than in the “brick and mortar world.” This policy decision can produce some seemingly odd results, whereby someone may be liable for defamation for mailing a stamped letter to numerous people but have immunity if they communicate the same information via the internet. That disparity raises the questions addressed next.

Publisher Liability Versus Distributor Liability

Malwarebytes, Inc. v. Enigma Software Group USA, LLC

Typically, an individual statement regarding the denial of a petition for certiorari does not receive much attention. But Justice Thomas’s statement in Malwarebytes, Inc. v. Enigma Software Group USA, LLC in 2020 warrants close review here. Justice Thomas suggested that courts in section 230 cases have mistakenly confused publisher liability with distributor liability. He explained: “Traditionally, laws governing illegal content distinguished between publishers or speakers (like newspapers) and distributors (like newsstands and libraries). Publishers . . . could be strictly liable for transmitting illegal content. But distributors were . . . liable only when they knew (or constructively knew) that content was illegal.”

Justice Thomas’s discussion of Stratton Oakmont, Inc. v. Prodigy Services Co. and the legislative history surrounding Congress’s use (or lack thereof) of the terms “publisher” and “distributor” in section 230 and other Communications Decency Act provisions is quite provocative. He raises concerns about “extending § 230 immunity beyond the natural reading of the text” and comments that the court should decide the “correct interpretation of § 230” in the future.

In re Facebook

The Texas Supreme Court later noticed Justice Thomas’s statement and discussed it at length in its In re Facebook, Inc. opinion issued during the summer of 2021. This case included a set of facts that are unfortunately reminiscent of the horrible facts in the Backpage.com case discussed above. Three alleged victims of sex trafficking, who were minors at the relevant time, became ensnared in the perpetrators’ traps via the tools of Facebook and Instagram (owned by Facebook). Facebook, the relator, sought dismissal of the three separate cases, relying on section 230. The Texas Supreme Court denied this request after engaging in a lengthy review of Justice Thomas’s statement in the Malwarebytes case.

Facebook had moved to dismiss, citing 47 U.S.C. § 230(e)(3), which provides that “[n]o cause of action may be brought, and no liability may be imposed under any State or local law that is inconsistent with this section.” Facebook argued that the plaintiffs’ claims are “inconsistent with” the primary provision under discussion in this article, section 230(c)(1).

The court strongly rejected this argument, saying “We do not understand section 230 to ‘create a lawless no-man’s-land on the Internet’ in which states are powerless to impose liability on websites that knowingly or intentionally participate in the evil of online human trafficking.” The Texas Supreme Court relied in part on the Roommates.com decision discussed above, stating:

Holding internet platforms accountable for the words or actions of their users is one thing, and the federal precedent uniformly dictates that section 230 does not allow it. Holding internet platforms accountable for their own misdeeds is quite another thing. This is particularly the case for human trafficking. Congress recently amended section 230 to indicate that civil liability may be imposed on websites that violate state and federal human-trafficking laws.

Furthermore, the court quoted Roommates.com for the proposition that “[a] defendant that operates an internet platform ‘in a manner that contributes to,’ or is otherwise ‘directly involved in,’ ‘the alleged illegality’ of third parties’ communication on its platform is ‘not immune.’”

So, Is It Time to Review Section 230?

While these cases pose interesting dilemmas for litigants and courts, is it time to review section 230? Given the amount of discussion in the media and in D.C. and state capitols, the answer appears to many to be a resounding yes! But is it? Does the sample of cases summarized above warrant such further review? Or is this current debate motivated by other reasons (or both)?

The above cases presented novel situations, but any candid observer would find that the current controversy generally centers on the cause célèbre of Big Tech and its control over what content appears on its respective platforms. This debate takes on a strongly political flavor, as some conservatives state that the internet is slanted against their views and liberals argue that platforms are protecting society from incorrect and/or inciting messaging.

As noted above, there is no real debate that Big Tech is big. It is big in many ways. This article assumes this to be the case (since this is not an antitrust complaint/brief). For example, in terms of market capitalization, Amazon, Apple, Google, and Microsoft easily exceed $1 trillion each. A bit more startling is the fact that the “Big Five” of Big Tech—Apple, Amazon, Alphabet (Google), Facebook, and Microsoft—make up about 20 percent of the total value of the stock market!

But these financial facts may be a bit esoteric. More practical tests of size include the following.

  • During COVID, where do the millions of consumers “go” to buy something every day? Amazon.
  • How do billions of people stay connected to Grandma or old high school classmates in another city? Facebook.
  • Where do millions of people go to vent their opinions in a few words? Twitter.
  • From what company do millions buy multiple smartphones, EarPods, PCs, desktops, notebooks, and smart watches year after year? Apple.
  • If you are going to create a document or presentation for school or work, what software do you use? Microsoft.
  • How do millions search online for the latest information on the pandemic or the bio of the star of your favorite show to binge? Google.

Indeed, some have come to call these platforms “digital nation states.”

On top of this size issue, section 230 raises constitutional concerns, and some claim that the provision “is the most important law protecting free speech.” But the issue really became inflamed in the context of political speech and social media moderation decisions. Several months before the tragic events of January 6, 2021, President Trump and other conservatives had raised issues about unfair “censorship” of their views by social media platforms such as Twitter. Trump even signed an executive order directing federal agencies to review “social media censorship,” which stated in part:

Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.

As President, I have made clear my commitment to free and open debate on the internet. Such debate is just as important online as it is in our universities, our town halls, and our homes. It is essential to sustaining our democracy.

Online platforms are engaging in selective censorship that is harming our national discourse. Tens of thousands of Americans have reported, among other troubling behaviors, online platforms “flagging” content as inappropriate, even though it does not violate any stated terms of service; making unannounced and unexplained changes to company policies that have the effect of disfavoring certain viewpoints; and deleting content and entire accounts with no warning, no rationale, and no recourse.

A few weeks later, Representative Devin Nunes’s lawsuit against Twitter was dismissed due to section 230 “immunity.” Nunes had claimed that Twitter had orchestrated a nefarious scheme to silence his voice and assassinate his character by enabling the publication of several false and defamatory statements against him via satirical anonymous accounts.

Another episode in this controversy was a “workshop” conducted by Attorney General William Barr’s Department of Justice (DOJ) in February 2020, which was followed by a report with recommendations to Congress as to section 230. Of course, with the change in the presidential administration, this report may not carry as much (if any) weight. But it is still somewhat instructive as to ideas about what to do with section 230.

It is also worth noting here that then-candidate Joe Biden called for the repeal of section 230, telling the New York Times Editorial Board that “Section 230 should be revoked, immediately should be revoked, number one.” There was also this question-and-answer exchange:

CW: That’s a pretty foundational laws[sic] of the modern internet.

[Biden:] That’s right. Exactly right. And it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.

Academics can be found on all sides of the issue of whether to reboot section 230 and, if so, how. Some argue “that there is a growing consensus that we need to update Section 230.” In his book, Gillespie highlights three considerations for the calls to review section 230.

  • The “safe harbor” law was not designed for the social media platforms, which benefit from it today.
  • Section 230 laws are limited to the United States, and platforms are international.
  • Terrorism and hate speech are placing higher stakes on the debate.

Other Issues Stoking the Section 230 Debate

Two huge events have added even more fuel to this fire: the January 6, 2021, Capitol riot and the COVID-19 pandemic.

Section 230 Flash Point: The Mob

All of this served as a prelude to the decision of Twitter, Facebook, and YouTube to suspend/revoke (i.e., “deplatform”) former President Trump’s accounts over election-related and other claims (regarding, e.g., COVID-19). This development, along with the horrible day in January 2021, amplified the debate over section 230. Congress has held a series of hearings with the relevant CEOs. Bills in various states (to be discussed below) started to appear regarding digital platforms, social media platforms, and censorship. And Facebook’s Oversight Board issued a report on Facebook’s actions regarding Trump. The board upheld Facebook’s decision to restrict Trump’s access to posting content on his Facebook page and Instagram account, but the board also found that it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. (Facebook later modified the suspension to two years.)

Just two weeks after the riot at the Capitol, the Congressional Research Service issued a report, Social Media: Misinformation and Content Moderation Issues for Congress. The report concluded that if Congress decides to address the issue of misinformation or moderation, it might consider:

  • the “scope of proposed actions, under what conditions they would be applied, and the range of . . . legal, social, and economic consequences”;
  • “costs . . . that further entrench[] the market power of incumbent[s]”; and
  • “how U.S. actions . . . fit within an international legal framework.”

Following the deplatforming, Trump filed three class action lawsuits in July 2021 against Twitter, Facebook, and Google/YouTube, respectively. Trump’s two basic complaints against each company include the following.

  • The defendant reacts to “coercive pressure from the federal government to regulate specific speech,” which amounts to “state action” and violates the Class Member’s First Amendment rights to participate in a “public forum.”
  • Section 230 is “unconstitutional on its face” because Congress cannot “induce, encourage . . . private persons to accomplish what it is constitutionally forbidden to accomplish.”

Like many Trump-related issues, the merits and possible success of Trump’s lawsuits (beyond attracting even more attention to the issue) have generated polar viewpoints.

The Pandemic and Section 230

COVID has wreaked havoc on all of us in so many ways. One issue that has arisen relative to section 230 is the censorship of COVID-19 misinformation by social media platforms. Information that has been censored from the internet has ranged from theories on the origin of the disease to the severity of treatments (e.g., medicines) and possible cures.

This censorship has taken place in a very volatile situation where theories and government-recommended approaches to the disease change as events unfold. As the Congressional Research Service report noted:

[p]art of the difficulty addressing COVID-19 misinformation is that the scientific consensus about a novel virus, its transmission pathways, and effective mitigation measures is constantly evolving as new evidence becomes available. During the pandemic, the amount and frequency of social media consumption increased. Information about COVID-19 spread rapidly on social media platforms, including inaccurate and misleading information, potentially complicating the public health response to the pandemic.

There have even been reports of government coordination with platforms on these important issues and finger-pointing between the two entities. Senator Amy Klobuchar filed a bill that would penalize platforms for “spreading lies” about COVID-19. One would think it would be a weighty proposition for a company to decide (whether by assigned moderators/people or by algorithms) what is or is not accurate as to complex diseases, much less the multitude of other issues that appear on its platform daily.

State Action

But not all the section 230 action is in Washington, D.C., or in the courts. The agendas at state capitol buildings around the nation have been filled with legislation relevant to the section 230 debate, arising from perceived censorship and the power of the major digital/social media platforms. As will be seen, generally these efforts have not reached fruition without controversy. This article focuses on two battleground states: Florida and Texas.

Florida

Florida passed Senate Bill 7072, which was supposed to take effect on July 1, 2021:

  • The bill establishes a violation for social media deplatforming of a political candidate or journalistic enterprise and requires a social media platform to meet certain requirements when it restricts speech by users. The bill prohibits a social media platform from willfully deplatforming a candidate for political office and allows the Florida Elections Commission to fine a social media platform $250,000 per day for deplatforming a candidate for statewide office and $25,000 per day for deplatforming any other candidate, in addition to the remedies provided in chapter 106 of the Florida Statutes. If a social media platform willfully provides free advertisements for a candidate, such advertisement is deemed an in-kind contribution, and the candidate must be notified.
  • The bill provides that a social media platform that fails to comply with the requirements under the bill may be found in violation of the Florida Deceptive and Unfair Trade Practices Act by the Department of Legal Affairs (Attorney General).
  • The bill permits a user of a social media platform to bring a private cause of action against a social media platform for failing to apply consistently certain standards and for censoring or deplatforming without proper notice.

The bill was met with criticism (reflective of this controversy) and litigation. Before the bill was even able to take effect, Judge Hinkle issued a preliminary injunction in NetChoice v. Moody. Judge Hinkle identified many legal deficiencies in SB 7072, ruling that “the plaintiffs are likely to prevail on their challenge to the preempted provisions—to those applicable to a social media platform’s restriction of access to posted material.”

Hinkle made other observations, such as: “The First Amendment does not restrict the rights of private entities not performing traditional, exclusive public functions.” He then applied strict scrutiny in his review of the First Amendment claims, finding that SB 7072 is content-based legislation and writing:

To survive strict scrutiny, an infringement on speech must further a compelling state interest and must be narrowly tailored to achieve that interest. See, e.g., Reed, 576 U.S. at 171. These statutes come nowhere close. Indeed, the State has advanced no argument suggesting the statutes can survive strict scrutiny. They plainly cannot.

Texas

My home state of Texas is also a setting for section 230–related legislation. Texas had at least “two bites at the apple” before finally adopting House Bill 20. The bill’s general purpose is to establish complaint procedures and disclosure requirements for social media platforms regarding the censorship of users’ expressions by an interactive computer service. The bill includes requirements such as publication of “transparency” reports regarding the platform’s moderation efforts. The bill also focuses on “viewpoint discrimination.” A key section in the bill on censorship provides:

Sec. 143A.002. CENSORSHIP PROHIBITED. (a) A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on:

(1) the viewpoint of the user or another person;

(2) the viewpoint represented in the user’s expression or another person’s expression; or

(3) a user’s geographic location in this state or any part of this state.

(b) This section applies regardless of whether the viewpoint is expressed on a social media platform or through any other medium.

NetChoice also challenged the Texas law, filing a complaint in September 2021 in federal district court in Austin. The complaint points to Judge Hinkle’s ruling on the Florida law for support. The complaint also alleges that H.B. 20 violates the First Amendment, is void for vagueness under the Due Process Clause of the Fourteenth Amendment, violates the Commerce Clause, is preempted under the Supremacy Clause and section 230, and violates the Equal Protection Clause of the Fourteenth Amendment.

The Texas social media law met the same fate as the Florida bill when Judge Pitman issued a preliminary injunction blocking the law from taking effect on December 2, 2021. The judge cited the Florida ruling. Judge Pitman found that H.B. 20 violated the First Amendment, many terms in the bill were “vague,” and it discriminated against Big Tech social media platforms. The court also rejected the state’s “common carrier” argument and ruled that the severability clause did not save other provisions in the bill. One of the court’s observations was about the impracticality of provisions regarding transparency and a user appeals process given the enormous amount of traffic that flows on these platforms every day. This premise and these rulings present serious challenges to legislators seeking to impose some sort of restrictions on these platforms. The state has indicated it will appeal the ruling to the Fifth Circuit Court of Appeals.

The Future of State Initiatives

If these various initiatives fail, it would not be surprising to witness their return in future state sessions as controversial bills often take more than one session to pass or finally die. If passed, as demonstrated in Florida and Texas, subsequent litigation is all but assured.

What Changes Should Be Made to Section 230?

Setting aside the (in my view, unlikely) nuclear option of striking section 230 from the U.S. Code, what are some possible changes that could be made to section 230 in light of the above considerations?

In September 2020, the Barr DOJ Report mentioned above recommended draft legislation that:

  • “has a series of reforms to promote transparency and open discourse and ensure that platforms are fairer to the public when removing lawful speech from their services”;
  • “[e]xplicitly overrule[s] Stratton Oakmont to [a]void [m]oderator’s [d]ilemma . . . [by] clarifying that a platform’s removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service”;
  • outlines a “category of amendments aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation claims”; and
  • “proposes carving out certain categories of civil claims that are far outside Section 230’s core objective, including offenses involving child sexual abuse, terrorism, and cyberstalking.”

Danielle Keats Citron and Benjamin Wittes believe that 47 U.S.C. § 230(c)(1) immunity is “too sweeping,” and they have suggested this new language (in italics):

No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.

Mark Zuckerberg, praising section 230 for its promotion of the internet, has offered some suggestions to modify section 230 in testimony before Congress while defending Facebook’s “misinformation” and “hate speech” identification efforts:

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content. Definitions of an adequate system could be proportionate to platform size and set by a third-party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right. In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

Michael D. Smith and Marshall Van Alstyne describe such language as a “duty of care” standard. Neil Fried argues that section 230 removed the ordinary business standard to act with a duty of care toward customers/users.

Ordinarily, businesses have a common law duty to take reasonable steps to not cause harm to their customers, as well as to take reasonable steps to prevent harm to their customers. That duty also creates an affirmative obligation in certain circumstances for a business to prevent one party using the business’s services from harming another party. Thus, platforms could potentially be held culpable under common law if they unreasonably created an unsafe environment, as well as if they unreasonably failed to prevent one user from harming another user or the public.

Section 230(c)(1), however, states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have concluded that this provision “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”

In short, Fried posits that section 230 has created a disincentive for platforms to moderate content and recommends that Congress:

  • “amend Section 230 to require that platforms take reasonable steps to curb unlawful conduct as a condition of receiving the section’s liability protections”; and
  • “create transparency provisions requiring platforms to adopt and disclose content moderation policies addressing (1) what content the platforms will take down and leave up; (2) how people can file complaints about deviations from those policies; (3) how people can appeal the platforms’ decisions under those policies; and (4) disclosure of aggregated data regarding complaints, takedowns, denial of takedown requests, and appeals.”

As if there were a need to bring any further attention to this issue, a “whistleblower” came forward in the fall of 2021 and provided internal Facebook documents to the Wall Street Journal, which published an intensive series of articles—called “The Facebook Files”—critical of Facebook’s practices, business model, and impact on society. The whistleblower then appeared before congressional committees. This kept, if not brightened, the spotlight on section 230.

It remains to be seen what Congress will actually do on this important national issue.

What Next?

Assuming that Congress has not addressed section 230 further by the time of publication, I suspect that the section 230 controversies will continue to thrive at both the federal and state levels and in various legislative, judicial, and political forums. A couple of years ago, I published an article in this publication stating that “it may be difficult to move NN (Net Neutrality) off its perch at the top of the regulatory box office.” I think that it is probably fair to say that there is a new Number One in this box office—section 230 (perhaps with the privacy issue not far behind).

In the meantime, in its In re Facebook opinion, the Texas Supreme Court fairly summarized some of the basic considerations going forward.

The internet today looks nothing like it did in 1996, when Congress enacted section 230. The Constitution, however, entrusts to Congress, not the courts, the responsibility to decide whether and how to modernize outdated statutes. Perhaps advances in technology now allow online platforms to more easily police their users’ posts, such that the costs of subjecting platforms like Facebook to heightened liability for failing to protect users from each other would be outweighed by the benefits of such a reform. On the other hand, perhaps subjecting online platforms to greater liability for their users’ injurious activity would reduce freedom of speech on the internet by encouraging platforms to censor “dangerous” content to avoid lawsuits. Judges are poorly equipped to make such judgments, and even were it otherwise, “[i]t is for Congress, not this Court, to amend the statute if it believes” it to be outdated.

Congress, what say you?
