
Litigation News | 2021

Twitter Not Liable for Defamatory Posts

Jonathan W. Lounsberry

Summary

  • Social media companies are immune from liability for individual users’ posts under the CDA unless they materially contribute to the statements.
  • A federal court recently dismissed a defamation claim under the Communications Decency Act.
  • The decision may free social media companies from any obligation to police user content.

Social media companies are not liable for defamatory messages users post on their platforms. In Brikman v. Twitter, Inc., a federal court dismissed a defamation claim under the Communications Decency Act, holding that an interactive computer service could be liable only if it materially contributed to the unlawful statements. ABA Litigation Section leaders say the decision may free social media companies from any obligation to police user content.

Harassing Tweets Lead to Defamation Suit Against Twitter

The plaintiff, a rabbi of a Brooklyn synagogue, alleged that someone impersonating the synagogue was making defamatory, offensive, and harassing posts on Twitter and demanded that the posts be removed. Twitter refused because the posts did not violate its rules and policies. The plaintiff sued, alleging that Twitter aided and abetted the defamation by hosting and refusing to take down the offending posts. Twitter moved to dismiss.

The U.S. District Court for the Eastern District of New York granted Twitter’s motion, determining that Twitter was immune from the plaintiff’s claims pursuant to the Communications Decency Act (CDA).

Court Finds Twitter Immune under CDA

To qualify for immunity under the CDA, (1) the defendant must be a provider or user of an interactive computer service, (2) the claim must be based on information provided by another information content provider, and (3) the claim must treat the defendant as the publisher or speaker of that information. The court found all three factors present.

First, the court determined that Twitter qualified as an interactive computer service. It cited the CDA, which defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” The court stated that “Twitter is an online platform that allows multiple users to access and share the content hosted on its servers” and, thus, was an interactive computer service.

Next, the court found the plaintiff’s claims were based on information provided by another information content provider, not Twitter. The court emphasized that the CDA defined an “information content provider” as any person or entity that is responsible for the “creation or development” of information provided through the internet. Twitter did not play any role in the creation or development of the challenged tweets and thus was not an information content provider.

Finally, the court concluded that the plaintiff’s claims sought to hold Twitter liable as the publisher of information provided by another information content provider. The plaintiff argued that Twitter published the information because it hosted the tweets on its platform and refused to remove the tweets when the plaintiff reported them. The court rejected this argument, noting that such an interpretation would “eviscerate” section 230 of the CDA because it would hold Twitter liable for organizing and displaying content provided by third parties. To impose liability on Twitter, the court reasoned, Twitter must have “directly and materially contributed to what made the content itself unlawful.” The plaintiff did not allege that Twitter contributed to the defamatory content of the individual user’s tweets.

Does Immunity Encourage Bad Actors or Web Interactivity?

Litigation Section leaders note that granting a company like Twitter protection from defamation liability may have significant policy implications. “Traditionally, this type of laissez-faire approach has the potential to prompt either rampant misconduct and abuse of the freedom that interactive computer services are given or, on the other end of the spectrum, laudable self-regulation that adequately addresses the issues of concern,” explains Veronika Balbuzanova, Plantation, FL, member of the Section’s Privacy & Data Security Committee.

Further, “the downside is that the immunity can reduce incentives for platforms to police extreme user content, and it can backstop a bad actor’s business model. Sites that traffic in extreme user content rely on the CDA to escape liability for that content while making money from it,” notes Jonathan Peters, Athens, GA, chair of the First Amendment Subcommittee of the Section’s Civil Rights Litigation Committee. “But the upside, which I think is much more significant, is that the immunity has enabled web interactivity to flourish on sites as diverse as YouTube, Craigslist, and Yelp. We would not have the level of interactivity that we have today without section 230,” he adds.

This raises the question of whether the immunity afforded to an interactive computer service affects the truth and accuracy of posts made on its service. “The Brikman court’s interpretation of section 230(c)(1) of the CDA essentially provides that social media giants like Twitter, Facebook, Instagram, Snapchat, etc., have no legal duty to fact-check or verify the truth and accuracy of any content posted on their platforms,” notes Balbuzanova.

Thus, “the sites themselves are really in the driver's seat, regardless of what section 230 incentivizes, as they decide what type of content may be posted, under what conditions to remove it, how to display and prioritize it using algorithms, and so on,” concludes Peters.
