March 01, 2017

Section 230 as Gatekeeper: When Is an Intermediary Liability Case Against a Digital Platform Ripe for Early Dismissal?

Jeff Hermes


A new case lands on your desk. You give the complaint a quick read, and it seems your client is being sued because an online platform or service the client operates was used as the medium for a third party’s allegedly nefarious deeds. Being an attorney savvy in the ways of digital communication, your mind immediately leaps to section 230(c)(1) of the Communications Decency Act, 47 U.S.C. § 230(c)(1), the alternatively praised and reviled federal law from 1996 that insulates “interactive computer services” from being treated “as the publisher or speaker of any information provided by another information content provider.” (Another part of section 230 specifically protects active moderation of user content (47 U.S.C. § 230(c)(2)), but we focus here on the general immunity provided by section 230(c)(1).)

For more than 20 years, section 230 (also referred to as the “CDA”) has been one of the fundamental legal principles that allows online services as we know them today to exist, by enabling digital platforms to host a never-ending flow of user-generated content without the burden of acting as gatekeeper for each and every comment. But some judges remain highly skeptical of whether the broad immunity granted by section 230 is either fair or wise, and practitioners have been surprised by a number of recent rulings in favor of plaintiffs. In this environment, developing a successful strategy for defending a section 230 case takes more than a bare invocation of the statute.

Working through a series of key questions will shed light on the types of cases that might be disposed of under section 230 on a motion to dismiss or an anti–SLAPP (strategic lawsuit against public participation) motion, as opposed to those that are better litigated with summary judgment (or, gasp, trial) in mind.

Does Your Case Fall Within an Exception to the Scope of Section 230?

The place to start is whether section 230 applies to your case at all. The statute does not preclude claims falling within certain enumerated exceptions found in section 230(e), including those sounding in (1) federal criminal law, (2) intellectual property law, (3) state law, and (4) communications privacy law.

Not all of these present significant limitations on the protection offered by section 230. Much to the chagrin of state law enforcement, the third exception is more or less toothless: It provides that state law claims are not preempted by section 230 to the extent—and only to the extent—that they are consistent with section 230. Thus, the third exception is less a carve-out from the protection of section 230 and more a reinforcement of Congress’s general intent to preempt conflicting state laws. You are also unlikely to run afoul of the fourth exception, which allows claims to proceed under the Electronic Communications Privacy Act and similar state laws (e.g., state wiretapping and interception laws). These claims are usually brought against the party who conducted an illegal interception, not third parties who might have repeated any information obtained. Such third-party claims are limited by Bartnicki v. Vopper, 532 U.S. 514 (2001), which held that the First Amendment protects publication of illegally obtained information on matters of public concern so long as the publisher was not involved in the initial illegal acquisition.

It is far more probable that you would need to parse the first two exceptions. The federal criminal law exception is fairly straightforward, covering crimes such as those set forth in 18 U.S.C. § 1591 (sex trafficking, including through online advertisements) and 18 U.S.C. § 2252A (distribution of child pornography). But what about civil causes of action created as part of federal statutes that are primarily criminal in nature? There is no definitive nationwide answer—the Supreme Court has never addressed the reach of section 230—but in March 2016, the U.S. Court of Appeals for the First Circuit roundly rejected an attempt to apply the federal criminal law exception to a civil claim under 18 U.S.C. § 1591. See Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016), cert. denied, 137 S. Ct. 622 (2017). In other words, only federal criminal prosecutions are excluded from section 230 under this exception.

What about the intellectual property exception? If you are in federal court in the Ninth Circuit, fairly clear precedent suggests that it applies only to federal intellectual property claims, such as claims under the Copyright Act or trademark claims under the Lanham Act, and not to state law intellectual property claims, such as state trademark claims or right of publicity claims. See Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102, 1118–19 (9th Cir. 2007). If the complaint does include federal intellectual property claims, section 230 will not apply to those claims. However, you might be able to look to alternative forms of protection, such as the safe harbor provisions of the Digital Millennium Copyright Act (DMCA), 17 U.S.C. § 512, or limitations on secondary trademark liability, see, e.g., Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93 (2d Cir. 2010).

But courts in several other jurisdictions, including at least one California court, have read section 230 as permitting state law intellectual property claims to proceed. See, e.g., Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 422–23 (1st Cir. 2007) (stating in dicta that Florida trademark dilution claim not barred by section 230); Atl. Recording Corp. v. Project Playlist, Inc., 603 F. Supp. 2d 690, 703–04 (S.D.N.Y. 2009) (finding N.Y. common-law copyright claim not barred by section 230); Cross v. Facebook, Inc., Civ. No. 537384, slip op. at 5 (Cal. Super. Ct. May 31, 2016) (finding California right of publicity claim not barred by section 230).

Right of publicity claims deserve special attention in this context. The right of publicity is frequently defined as a form of intellectual property based on the accumulated value of one’s persona and thus might be permitted to proceed despite section 230. See Cross, No. 537384, slip op. at 5 (treating right of publicity claim as intellectual property claim for purposes of section 230); Doe v. Friendfinder Network, Inc., 540 F. Supp. 2d 288, 302–03 (D.N.H. 2008) (same). There is, however, another theory of the tort that sounds in privacy. This version of the tort considers whether the public use of one’s name, image, or persona is offensive to one’s preference not to be drawn into public discussion, and not whether the “appropriation” of the identity is commercial in nature. Tort authority William L. Prosser discussed such claims as the last of his classic four privacy torts. See William L. Prosser, Privacy, 48 Cal. L. Rev. 383, 401–07 (1960).

If you can establish that the plaintiff’s claim derives from embarrassment or a desire to be left alone, rather than an attempt to recoup economic loss, you may be able to argue that the claim is actually asserting a privacy theory. See Stacey L. Dogan & Mark A. Lemley, What the Right of Publicity Can Learn from Trademark Law, 58 Stan. L. Rev. 1161, 1208–10 (2006) (distinguishing privacy-based theories of the right of publicity from intellectual property theories). In contrast to intellectual property claims, privacy claims may be barred by section 230. However, courts analyzing rights of publicity have rarely discussed this distinction. You should carefully review how your jurisdiction characterizes rights of publicity to see whether there is room for this argument.

How Substantive Are Allegations of Responsibility?

In the typical section 230 case, the defendant operates a digital platform and is alleged to be responsible for the content of user-generated comments or submissions—for example, Twitter being sued due to a user’s tweet or a news website sued over a comment posted in response to a particular article. These claims frequently sound in defamation but can depend on just about any theory of content-based liability (subject to the exceptions discussed above), including privacy, negligence, and unfair competition.

These types of claims present the paradigmatic case for application of section 230(c)(1), which protects a “provider or user of an interactive computer service” from being “treated as the publisher or speaker of any information provided by another information content provider.” In other words, a website or digital platform generally cannot be held responsible for content created by a third party. This fact pattern mirrors closely the circumstances of Stratton Oakmont, Inc. v. Prodigy Services Co., No. 31063/94 (N.Y. Sup. Ct. Mar. 10, 1995), the defamation case that triggered the addition of section 230 to the CDA. In that case, online service Prodigy was held liable for failing to remove allegedly defamatory content despite the fact that it engaged in some attempt to moderate user submissions. Congress felt that the case raised structural concerns for the operation of open Internet platforms—namely, that imposition of liability on a platform would require it to undertake the impracticable task of vetting user-submitted comments on a case-by-case basis.

But section 230 provides protection only with respect to third-party content; content created by a digital platform’s own staff is not covered. In the parlance of the statute, an “information content provider”—i.e., “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service,” 47 U.S.C. § 230(f)(3)—is not protected for the content it “creates or develops.”

As a result, a plaintiff will almost universally allege that your client is not merely an “interactive computer service” but is in some sense an “information content provider” with respect to the material at issue. A plaintiff’s failure to include such allegations, or reliance on mere conclusory allegations, has been found to be grounds for dismissal on a Rule 12(b)(6) motion in jurisdictions following the Iqbal/Twombly standard. See Kimzey v. Yelp! Inc., 836 F.3d 1263, 1268 (9th Cir. 2016) (“We have no trouble in this case concluding that threadbare allegations of fabrication of statements are implausible on their face and are insufficient to avoid immunity under the CDA.”) (citing Ashcroft v. Iqbal, 129 S. Ct. 1937 (2009), and Bell Atl. Corp. v. Twombly, 127 S. Ct. 1955 (2007)). That is to say, unless the plaintiff includes specific factual allegations regarding how your client created or developed content, section 230 will generally support dismissal of the plaintiff’s claim.

However, in courts following a more permissive pleading standard, the likelihood of winning a 12(b)(6) motion on a section 230 defense is much reduced; courts in these jurisdictions have found that even relatively superficial allegations can survive a motion to dismiss. See, e.g., J.S. v. Vill. Voice Media Holdings, L.L.C., 359 P.3d 714, 717–18 (Wash. 2015) (denying motion to dismiss). Moreover, even under Iqbal and Twombly, the plaintiff’s factual allegations will be presumed to be true; the question is not whether those assertions are believable, but whether the plaintiff has asserted sufficient facts to establish a plausible theory of liability that does not depend on conclusory allegations or facts that might develop later. Iqbal, 129 S. Ct. at 1949–50; Twombly, 127 S. Ct. at 1965–66. Compare Huon v. Denton, 841 F.3d 733, 742 (7th Cir. 2016) (concluding that plaintiff’s detailed allegations that website’s staff wrote user comments under pseudonyms might be unlikely but must be treated as true), with Silver v. Quora, Inc., No. 1:15-cv-00830, slip op. at 6 (D.N.M. June 13, 2016) (on motion to dismiss, rejecting conclusory assertions that website employees wrote user comments), aff’d, 666 F. App'x 727 (10th Cir. 2016).

A plaintiff may therefore attempt to include sufficiently detailed allegations in the complaint to overcome this hurdle. But, depending on the specific nature of those allegations, a motion to dismiss might still be possible. These allegations generally fall into three categories: (1) claims that your client knew about illegal content; (2) claims that your client solicited, encouraged, or approved of offensive material; and (3) claims that your client in fact created the illegal material that it attributes to its users. Each of these will be discussed below.

Does the plaintiff allege that your client knew that user content was illegal? Plaintiffs will often fault digital platforms for failing to take action when notified or aware that particular user content violates the law. This is sometimes known as a “distributor liability” theory. The argument is usually that while section 230 prevents treating a platform as a “publisher,” there is in the common law a distinction between “publishers” and “distributors.” Publishers (such as newspaper companies) were presumed to know about the content of their publications, whereas distributors (such as bookstores) could only be held liable for the content they distributed if knowledge of that content was proven.

Accordingly, plaintiffs (particularly in the early days of section 230) argued that the statute merely prevented courts from presuming that digital platforms knew about offensive content but that platforms could be held liable if proven to have knowledge as a distributor. See, e.g., Zeran v. Am. Online, Inc., 129 F.3d 327, 331–33 (4th Cir. 1997) (discussing and rejecting distributor liability theory). The distributor liability theory rarely found favor (for an example of where it did, see Grace v. eBay, Inc., 16 Cal. Rptr. 3d 192, 198–99 (Ct. App. 2004), depublished, 99 P.3d 2 (Cal. 2004)) and is now largely considered extinct, particularly following a thorough examination of the issue by the California Supreme Court in Barrett v. Rosenthal, 146 P.3d 510 (Cal. 2006).

The result is that cases based on alleged notice of illegality or a distributor liability theory have been dismissed on Rule 12(b)(6) and anti-SLAPP motions. See Silver, No. 1:15-cv-00830, slip op. at 7–8 (rejecting distributor liability theory on motion to dismiss); Glob. Royalties, Ltd. v. Xcentric Ventures, LLC, 544 F. Supp. 2d 929, 931–32 (D. Ariz. 2008) (holding allegations of notice of illegal content insufficient to overcome section 230 and granting motion to dismiss); Eckert v. Microsoft Corp., 2007 U.S. Dist. LEXIS 15295, at *8–9 (E.D. Mich. Jan. 8, 2007) (same); Hupp v. Freedom Commc’ns, Inc., 221 Cal. App. 4th 398, 404 (2013) (following Barrett and upholding grant of anti-SLAPP motion on distributor liability claim).

Does the plaintiff allege that your client encouraged or solicited user content? If instead of alleging failure to act, the plaintiff alleges some affirmative involvement by your client with user content, the analysis becomes more complicated. The leading case on this issue is Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc), in which the Ninth Circuit held that “a website helps to develop unlawful content, and thus falls within the exception to section 230, if it contributes materially to the alleged illegality of the conduct.” Id. at 1168. Under this interpretation of section 230, it is not enough that your client might have solicited, encouraged, ratified, or even paid for the creation of particular content, so long as it did not specifically solicit or encourage the presence of unlawful content in the user’s submission.

There has been some confusion on this point, stemming from both Roommates itself and the subsequent decision of the Tenth Circuit in FTC v. Accusearch Inc., 570 F.3d 1187 (10th Cir. 2009), which purported to follow Roommates.com. In the former case, the Ninth Circuit suggested that providing “neutral tools” that users might use illegally of their own accord does not amount to creating or shaping user-generated content. Roommates.com, 521 F.3d at 1169. In Accusearch, the Tenth Circuit held that “a service provider is ‘responsible’ for the development of offensive content only if it in some way specifically encourages development of what is offensive about the content.” Accusearch, 570 F.3d at 1199.

Based on these statements, plaintiffs have argued that websites can receive section 230 protection only if they act as mere conduits for user submissions, and neither express opinions nor encourage particular types of discussions. Some courts have accepted this argument. See, e.g., J.S. v. Vill. Voice, 359 P.3d at 717–18 (“J.S. alleged facts that, if proved true, would show that Backpage did more than simply maintain neutral policies prohibiting or limiting certain content.”). But the Ninth Circuit’s discussion of “neutral tools” referred to user interfaces that allegedly directed users to include illegal materials, not to whether the attitude of the platform or its operators was in some sense unbiased. Roommates.com, 521 F.3d at 1169. And the Tenth Circuit’s reference to encouraging “offensiveness” was not in the colloquial sense of that term; rather, in context the term referred to encouraging a legal “offense.” See Accusearch, 570 F.3d at 1200 (“By paying its researchers to acquire telephone records, knowing that the confidentiality of the records was protected by law, it contributed mightily to the unlawful conduct of its researchers.”).

The Sixth Circuit’s opinion in Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398 (6th Cir. 2014), provides a better interpretation of these cases. In Jones, the court held that a website that encouraged distasteful and colloquially offensive gossip, but did not actually encourage the posting of false or illegal statements, could not be held responsible for allegedly defamatory content posted by its users. Id. at 414–15. The court also rejected liability on the basis of the alleged “adoption” or “ratification” of third-party submissions. Id. at 415. See also Seaton v. TripAdvisor LLC, 728 F.3d 592, 599 & n.8 (6th Cir. 2013) (finding travel website that compiled “2011 Dirtiest Hotels” list based on user-submitted reviews not liable for defamation; list itself was protected as opinion while section 230 prevented website from being held liable for underlying reviews).

Even before Jones, motions to dismiss and for judgment on the pleadings succeeded where platforms were accused of setting up forums for negative—but not necessarily actionable—content. See, e.g., Dart v. Craigslist, Inc., 665 F. Supp. 2d 961, 968–69 (N.D. Ill. 2009) (allegation that Craigslist was responsible for ads for illegal sex services because it hosted an “adult services” category did not overcome section 230; motion on the pleadings granted); Glob. Royalties, 544 F. Supp. 2d at 933 (fact that website encouraged posting of potentially damaging consumer reviews did not render site responsible for defamatory user posts; motion to dismiss granted).

Does the plaintiff allege that your client created unlawful content or required users to do so? As discussed above, section 230 offers no protection for an online platform’s own content. Direct allegations that a platform’s own personnel created content that appeared to come from users, or set up systems that required third parties to post unlawful material, have been significantly more successful in avoiding an early section 230 dismissal.

In Roommates.com, the Ninth Circuit held that the shaping of user submissions through an online form that required users to express roommate preferences that were allegedly illegal under the Fair Housing Act would not be protected by section 230. In Accusearch, the Tenth Circuit held that section 230 did not protect instructing third parties to gather data for publication while knowing they must violate privacy laws to do so. In Huon, 841 F.3d at 742, the Seventh Circuit held that allegations that a website’s employees actually wrote some user comments would survive a motion to dismiss. And in Enigma Software Group USA, LLC v. Bleeping Computer LLC, 194 F. Supp. 3d 263, 274 (S.D.N.Y. 2016), the Southern District of New York held that plausible allegations of an agency relationship between a website and a user who posts defamatory content (in that case, arising out of the site’s delegation of moderation authority to a specific user) could overcome section 230.

Note that in the midst of analyzing whether your client was involved in content creation generally, it is easy for courts to overlook whether your client is alleged to have created the specific statements about which a plaintiff is complaining. For example, in Huon, while the court held that allegations that Gawker’s employees wrote user comments were sufficient to avoid a motion to dismiss, it never considered whether they were alleged to have written the single user post that the court found might be defamatory. 841 F.3d at 742–43.

A similar example is FTC v. LeadClick Media, LLC, 838 F.3d 158 (2d Cir. 2016), a summary judgment case in which the Second Circuit held that digital ad network LeadClick was responsible for false advertisements created for its clients by third-party content developers. The court focused on LeadClick’s pervasive involvement in the placement of the ads in question and its role in passing communications back and forth between its clients and the third parties creating the ads, but the evidence was at best ambiguous as to whether LeadClick itself “created or developed” all of the allegedly deceptive material for which it was held liable. See id. Particularly where the facts are complex or murky, it is important not to lose sight of the ultimate issue of whether your client is alleged to have “created or developed” specific unlawful content.

You should also give special attention to claims that users were acting as your client’s agents when posting content, as in Enigma Software. Federal courts normally look to state law for the scope of agency relationships, making this one of the very few circumstances where state law could potentially define the scope of section 230. However, it is unlikely that Congress intended to delegate to the states the ability to limit section 230 through unusual or subject-specific local laws governing employment or agency, and you should be prepared to argue that section 230 preempts attempts to work around its protection in that manner.

In any event, where the complaint alleges facts that could support a finding that your client participated in the creation of illegal content, it is worth considering whether it might be better to wait until summary judgment to try to get the case kicked out. As discussed above, even under Iqbal/Twombly, courts are supposed to take the allegations of the plaintiff’s complaint as true, with Federal Rule of Civil Procedure 11 as the backstop for frivolous allegations. Huon, 841 F.3d at 742. When the facts are at all nuanced, judges skeptical of section 230 can take the opportunity to knock back a 12(b)(6) motion with a stiff reminder that the statute’s protection is not without limits. Worse, a ruling denying a motion to dismiss can serve as a road map for a plaintiff whose allegations are weak on how to shore up the complaint through discovery.

Another option, in jurisdictions where the mechanism is available, is to file an anti-SLAPP motion. Anti-SLAPP statutes provide a right to early dismissal of claims targeting the exercise of First Amendment rights. However, not every state has an anti-SLAPP law, not all anti-SLAPP laws apply to the type of claims typically asserted in section 230 cases, and not all such laws that do apply in these cases allow the submission of evidence in support of an anti-SLAPP motion. But an anti-SLAPP law that permits evidentiary submissions (usually akin to an early Rule 56 motion) can offer the best of both worlds, by forcing the plaintiff to back up the allegations of the complaint at an early stage and limiting the discovery faced by the defendant.

It is worth noting that anti-SLAPP laws are creatures of state law (there is currently no federal anti-SLAPP statute), and federal courts disagree about whether state anti-SLAPP laws can apply in federal cases. You might therefore lose this option if you remove to federal court—for example, in order to shift from a state court following a permissive Rule 12(b)(6) standard to a federal court following Iqbal/Twombly.

Fringe Cases

The cases discussed above form the core of section 230 jurisprudence, paralleling to some degree the issues raised by the original Stratton Oakmont decision. In these cases, there is no real question that the plaintiff wants to treat the defendant as a “publisher or speaker” of content; the only issue is whether the defendant is “responsible, in whole or in part, for the creation or development” of the content at issue. But there are also cases in which plaintiffs assert that a platform is not being treated as a “publisher or speaker” at all, despite the claim arising out of the platform’s interaction with its users.

These arguments have found success in the Ninth Circuit in particular. For example, Doe No. 14 v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016), involved a claim that the operator of networking website Model Mayhem negligently failed to warn its customers that criminals were using the site to identify potential victims. The court rejected a section 230 defense, holding the plaintiff did not

seek to hold Internet Brands liable as a “publisher or speaker” of content someone posted on the Model Mayhem website, or for Internet Brands’ failure to remove content posted on the website. . . . Nor does she allege that [her attackers] posted anything to the website. . . . Internet Brands is also not alleged to have learned of the predators’ activity from any monitoring of postings on the website, nor is its failure to monitor postings at issue. Instead, Jane Doe attempts to hold Internet Brands liable for failing to warn her about information it obtained from an outside source about how third parties targeted and lured victims through Model Mayhem.

Internet Brands, 824 F.3d at 851.

An older case, Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009), involved a claim that the site was liable in promissory estoppel for failing to remove a fabricated user profile that a company representative allegedly promised to remove. The Ninth Circuit held that “[c]ontract liability here would come not from Yahoo’s publishing conduct, but from Yahoo’s manifest intention to be legally obligated to do something, which happens to be removal of material from publication.” Id. at 1107.

Airbnb, Inc. v. City & County of San Francisco, No. 3:16-cv-03615 (N.D. Cal. Nov. 8, 2016), involved a preemptive challenge brought by online short-term housing marketplace Airbnb to a city ordinance that “makes it a misdemeanor to collect a fee for providing booking services for the rental of an unregistered unit.” The district court held that section 230 did not apply, stating that the ordinance

in no way treats plaintiffs as the publishers or speakers of the rental listings provided by hosts. . . . [P]laintiffs are perfectly free to publish any listing they get from a host and to collect fees for doing so—whether the unit is lawfully registered or not—without threat of prosecution or penalty under the Ordinance. . . . The Ordinance holds plaintiffs liable only for their own conduct, namely for providing, and collecting a fee for, Booking Services in connection with an unregistered unit.

Airbnb, slip op. at 6 (citation omitted).

In contrast to the “core” cases, these might be termed “fringe” cases because the connection of the plaintiffs’ claims to user-generated content is more attenuated. Nevertheless, courts in these cases have sometimes been too quick to dismiss the application of section 230. In Internet Brands, for example, the only relationship allegedly giving rise to a duty to warn was the fact that the website published the plaintiff’s profile, through which her attackers found her. Thus, the plaintiff’s theory of the case necessarily depended on Model Mayhem’s role as a publisher of the content that led criminals to their target.

True, these cases do not seek to hold the defendant liable in defamation or some other traditional publishing liability tort for the specific words of a user’s post, but nothing in the brief text of section 230 suggests that “treatment as a publisher” is limited to holding a defendant responsible for the damage caused by a user’s choice of language. To the contrary, even the Ninth Circuit in Barnes noted that to determine whether section 230 applies, courts “must ask whether the duty that the plaintiff alleges the defendant violated derives from the defendant’s status or conduct as a ‘publisher or speaker.’” 570 F.3d at 1102. A digital platform is also “treated as a publisher” when liability is imposed for its basic status as a publisher of third-party content.

Other courts have been more expansive in their interpretation of section 230. In Doe v. MySpace Inc., 528 F.3d 413 (5th Cir. 2008), the Fifth Circuit rejected a claim that MySpace negligently failed to prevent a minor from interacting with sexual predators via the website:

[Plaintiffs’] claims are barred by the CDA, notwithstanding their assertion that they only seek to hold MySpace liable for its failure to implement measures that would have prevented Julie Doe from communicating with Solis. Their allegations are merely another way of claiming that MySpace was liable for publishing the communications and they speak to MySpace’s role as a publisher of online third-party-generated content.

Id. at 420. See also McDonald v. LG Elecs. USA, Inc., No. RDB-16-1093, slip op. at 8–9 (D. Md. Nov. 10, 2016) (concluding that claim that Amazon.com negligently failed to warn purchasers of defective batteries offered for sale on its site by a third party was barred by section 230, following MySpace and finding that the Fourth Circuit would not follow Internet Brands).

Defeating a “fringe” case on a motion to dismiss can be difficult because the allegations of a complaint will tend to obscure any connection between the claims and your client’s activities as a publisher. That, combined with a judge’s sense that yours is not a typical section 230 case, might suggest waiting until summary judgment when your client can lay out in more detail why the claims offend the statute. The key is to explain exactly where the plaintiff’s theory of the case intersects with publication activity and why liability could deter platforms from carrying users’ speech.

Can policy arguments support your case? Like many famous First Amendment cases, section 230 cases often involve statements that are distasteful and less than sympathetic. Indeed, one of the primary reasons that a section 230 defense is appealing is that it does not require a digital platform to justify its users’ behavior. But that can place a defendant in an awkward position before a judge or jury, if the content in question is such that the court will want to hold someone responsible. A platform that appears to be making money from offensive material makes an attractive target, given that individual users are often difficult to locate or too poor to satisfy a judgment.

Just as First Amendment lawyers have long defended offensive speech by reference to greater principles, so too should attorneys representing platforms consider the broader effects of imposing liability for user-generated content. As discussed above, section 230 exists because Congress was concerned about the consequences that imposing certain kinds of responsibility would have for the growth of the Internet as a whole. Twenty years of section 230 have proven the power of the statute in fueling digital development, and innovation still continues. Reminding courts of the concerns that drove the adoption of the statute can lend strength to your defense even if your client’s specific website or the user content at issue is not particularly sympathetic.

The most basic policy argument, mentioned above, is that imposing liability on platforms for user content would put them in the impossible position of needing to review every comment that passes across their systems. Imagine what would happen if Twitter or Facebook could be held responsible for any defamatory content on its system. Perhaps some platforms might be able to create the massive infrastructure to handle the necessary monitoring, but most would either shut down or strictly limit user access to channels of communication. Section 230 cases that would impose a duty to monitor incoming content are sometimes known as “break-the-Internet” cases for that reason, and courts have usually tried to avoid results that would have such a far-reaching impact.

What if the plaintiff is not claiming your client should have blanket liability, but should only be responsible when it has reason to know about illegal content (as in a distributor liability case)? Plaintiffs could argue that a notice-and-takedown system, such as that under the Digital Millennium Copyright Act (DMCA), resolves many of these issues by avoiding the need for proactive monitoring. However, the types of claims covered by section 230 are qualitatively different from copyright claims under the DMCA. While a platform might in some circumstances be able to make its own judgment as to whether a user’s post infringes a third party’s copyright, it is rarely, if ever, possible to determine that a statement is, for example, actionably defamatory based only on the statement itself. As the Fourth Circuit held in Zeran when it rejected a plaintiff’s distributor liability claim, imposing liability based on notice

would require a careful yet rapid investigation of the circumstances surrounding the posted information, a legal judgment concerning the information’s defamatory character, and an on-the-spot editorial decision whether to risk liability by allowing continued publication of that information.

Zeran, 129 F.3d at 333.

The District of Arizona noted that “[t]he sheer number of internet postings, perhaps combined with the anonymity of many contributors, makes this unworkable for website operators, and the incentive would be simply to remove all questionable content.” Glob. Royalties, 544 F. Supp. 2d at 932. Worse, such a system could be abused to force the removal of constitutionally protected speech through the use of fraudulent notices that platforms cannot verify. This has been observed to be an issue with the notice-and-takedown system of the DMCA. See John Tehranian, The New ©ensorship, 101 Iowa L. Rev. 245, 272–74 (2015).

What if your client is accused of encouraging users to submit negative statements? Notably, amici made a persuasive argument to the Sixth Circuit in Jones v. Dirty World that holding a website responsible for encouraging content that is negative or offensive in a colloquial sense—but not actually illegal—could have a significant impact on the power of the Internet to inform the public on important issues. Per the court,

an encouragement test would inflate the meaning of “development” to the point of eclipsing the immunity from publisher-liability that Congress established. Many websites not only allow but also actively invite and encourage users to post particular types of content. Some of this content will be unwelcome to others—e.g., unfavorable reviews of consumer products and services, allegations of price gouging, complaints of fraud on consumers, reports of bed bugs, collections of cease-and-desist notices relating to online speech. And much of this content is commented upon by the website operators who make the forum available. Indeed, much of it is “adopted” by website operators, gathered into reports, and republished online. Under an encouragement test of development, these websites would lose the immunity under the CDA and be subject to hecklers’ suits aimed at the publisher.

Jones, 755 F.3d at 414.

Given that the website at issue in Jones was a particularly distasteful gossip site, the ability to hook into these broader concerns was critical. These arguments, however, were not made before the district court, where the case was marked by an ongoing series of interlocutory rulings against the defendant before it reached the Sixth Circuit on appeal of a $338,000 jury verdict.

And if the allegations at issue are complex enough that you might not be able to disentangle them on a motion to dismiss under section 230, do not be afraid to revisit your First Amendment defenses. Among other benefits, they might actually be a simpler road to dismissal; if a public official fails to allege actual malice in a defamation action, or if a statement at issue is plainly one of opinion, a court could kick out the case even if your client’s role in creating the content is unclear. More importantly, section 230 and First Amendment defenses are supposed to function symbiotically; section 230 is intended to protect freedom of expression on the Internet, even if it is the expression of your users rather than your client’s own content. Asserting both defenses can allow you to bring the weight of constitutional principle into cases with difficult facts.

An example of this being done successfully can be found in a trio of cases brought by the classified advertisements website Backpage.com. Several years ago, individual states began enacting statutes targeting commercial sexual abuse of minors, which would have rendered Backpage criminally liable for advertisements posted by its users. In each case, Backpage filed suit and successfully argued that the state law was both overbroad under the First Amendment and preempted by section 230. See Backpage.com, LLC v. Cooper, 939 F. Supp. 2d 805 (M.D. Tenn. 2013); Backpage.com, LLC v. McKenna, 881 F. Supp. 2d 1262 (W.D. Wash. 2012); Backpage.com, LLC v. Hoffman, No. 13-cv-03952 (D.N.J. Aug. 20, 2013). And more recently, the combined effect of the First Amendment and section 230 resulted in the dismissal of criminal charges in California against three of Backpage’s executives for “pimping.” See People v. Ferrer, No. 16FE019224 (Cal. Super. Ct. Dec. 9, 2016).

It can also help if your policy arguments are supported by an amicus curiae. There are any number of organizations that watch section 230 cases specifically for dangerous precedents that could impair the structure of online communication. These include not only major online companies but also nonprofit organizations such as the Electronic Frontier Foundation and industry groups such as the Internet Association. Especially on appeal, organizations such as these may be interested in getting involved in a case that raises the concerns described above.

Section 230 is a powerful tool to protect digital platforms, freeing them from responsibility for much of what transpires across their services. However, section 230 is not a magic wand that insulates platforms from litigating cases. Depending on the specific nature of the plaintiff’s allegations, claims that are ultimately defeated on the basis of section 230 might nevertheless survive to summary judgment or beyond. Understanding the different theories of liability that a plaintiff might assert, and how those theories intersect with section 230 jurisprudence and the policies behind the law, will help defense counsel recognize the most likely procedural path to victory and design their case accordingly.

Jeff Hermes

The author is deputy director at the Media Law Resource Center, New York City.