Who Decides? Civility v. Hate Speech on the Internet

by Jeffrey Rosen

In September, an anti-Mohammed video posted on YouTube set the Middle East ablaze. After days of rioting in Egypt and Libya, which claimed the life of an American ambassador, the U.S. government asked Google, which owns YouTube, to remove the video. Google refused to remove the video throughout the world, on the grounds that it didn’t violate YouTube’s hate speech policy, which bans speech that “attacks or demeans” a religious group, not a religious prophet or leader.

But Google “temporarily” blocked access to “Innocence of Muslims” in Libya and Egypt, owing to what Google called “the very difficult situation in [those countries].” There followed speeches at the United Nations by the presidents of the United States and Egypt defending two very different views of free expression: the American view, which says that speech can be banned only when it threatens to provoke imminent violence, and the European and Islamic view, which says that speech can be banned when it denigrates or demeans the social standing of a group based on religion or ethnicity.

The incident confirmed a lesson that is transforming our global debates about free speech: today, lawyers at Google, YouTube, Facebook, and Twitter have more power over who can speak and who can be heard than any president, judge, or monarch. As a private company, Google is not bound by the First Amendment, although it does pledge to enforce national laws within individual countries. (For this reason, Google removed the anti-Mohammed video in India after concluding that it clearly violated Indian hate speech laws.) To evaluate the regulation of hate speech today, we need to understand not only the very different laws on the books regulating hate speech around the globe and the enforcement policies of the governments in question, but also the definitions of hate speech adopted by the leading Internet service providers and, most importantly, how they are enforced and by whom.

The social media service providers have generally adopted European-style definitions that they are enforcing in the American tradition. But they are doing this without transparency or oversight, entrusting these decisions to user flags that are reviewed by lawyers and policy staffers, who combine the functions of legislators, judges, and executive officials. Although these lawyers are doing an impressive job under the circumstances, more transparency and predictability in these policies might serve constitutional values around the world.

The European and American free speech traditions are very different. The European tradition allows the regulation of group libel: speech that denigrates the dignity of a group or lowers its standing in society. In America, by contrast, the First Amendment to the U.S. Constitution has been interpreted to forbid the regulation of hate speech unless it threatens to provoke an imminent violent response and is likely to do so.

The American debate over hate speech was framed during the Founding era in the debate over the constitutionality of laws against seditious libel. The Sedition Act of 1798 made it a crime to publish “false, scandalous or malicious writings against the government of the United States, including the president or Congress, its officials . . . with intent to bring them into contempt or disrepute or to excite against them the hatred of the good people of the United States.” It was criticized as unconstitutional by Madison and Jefferson in the Virginia and Kentucky Resolutions, and the government’s power to ban seditious libel was formally repudiated in New York Times v. Sullivan in 1964: “The central meaning of the First Amendment,” the Court held, is that seditious libel cannot be made the subject of government sanction. Instead, a public official suing for defamation has to prove actual malice: actual knowledge of falsity or reckless disregard of the truth.

As for speech directed at private parties or groups, the Court resolved the tension in favor of speech in Brandenburg v. Ohio, in 1969. The case involved a Ku Klux Klan leader convicted under an Ohio statute that prohibited advocating “crime, sabotage, violence or unlawful methods of terrorism as a means of accomplishing industrial or political reform” after he made racist remarks about “sending the Jews back to Israel.” The Court struck down the law, holding that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

In other words, mere advocacy of hate cannot be banned unless it is an incitement to imminent lawless action that’s likely to succeed. The result is that hate speech in the United States is very hard for the government to regulate. Europe regulates group libel; America doesn’t.

But the social media service providers are free to adopt whatever definition of hate speech they choose. And which view do Internet service providers adopt? Generally, they adopt a mix of the two: a European-style definition enforced in an American way.

Consider the hate speech policy of YouTube. Google’s stated policy is to remove YouTube content “only if it is hate speech, violating its terms of service, or if it is responding to valid court orders or government requests.” Under YouTube’s terms of service, hate speech is “speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status, and sexual orientation/gender identity . . . . Sometimes there is a fine line between what is and what is not considered hate speech. For instance, it is generally okay to criticize a nation, but not okay to make insulting generalizations about people of a particular nationality.”

This sounds European. But when it comes to deciding whether a video can be banned as hate speech or as promoting terrorism, or must be protected as hateful religious or political speech, YouTube aims “to draw a careful line between enabling free expression and religious speech, while prohibiting content that incites violence.” This sounds American. And Google refused to remove “Innocence of Muslims” from YouTube completely because it “mocks Islam but not Muslim people” and therefore does not constitute hate speech under YouTube’s policy.

YouTube relies on users to flag inappropriate videos, which are then reviewed by its employees. The reviewers determine whether the videos contain nudity, animal abuse, or hate speech; incite violence; or “promote terrorism.” And for years, the ultimate decision maker over controversial speech at YouTube was Nicole Wong, who has since left the company. Her colleagues called her “The Decider,” because she had the power to decide what to remove and what to leave up on Google and YouTube. She chose to remove content that clearly violated local laws or YouTube’s hate speech policy but, in ambiguous cases, to leave postings up unless they clearly incited violence. For example, when a Turkish judge ordered the country’s telecom providers to block access to Google in response to videos that insulted Mustafa Kemal Ataturk, the founder of modern Turkey (insulting Ataturk is a crime under Turkish law), Wong decided which videos were illegal in Turkey, which videos violated YouTube’s terms of service banning hate speech, and which videos should be protected speech. She decided to remove videos that violated Turkish law, but only in Turkey, and when a Turkish judge demanded that Google block access to the videos throughout the world, she refused. As a result, the Turkish government blocked access to YouTube in Turkey.

What about other Internet service providers? According to the New York Times, “Facebook has some of the industry’s strictest rules. Terrorist organizations are not permitted on the social network, according to the company’s terms of service. In recent years, the company has repeatedly shut down fan pages set up by Hezbollah. In a statement after the killings of United States Embassy employees in Libya, the company said, ‘Facebook’s policy prohibits content that threatens or organizes violence, or praises violent organizations.’”

The policy Facebook was referring to in that statement reads: “You may not credibly threaten to harm others, or organize acts of real-world violence. We remove content and may escalate to law enforcement when we perceive a genuine risk of physical harm, or a direct threat to public safety. We also prohibit promoting, planning or celebrating any of your actions if they have, or could, result in financial harm to others, including theft and vandalism.” That sounds American.

But Facebook also bans “hate speech,” which it defines as content that attacks “a person based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” Facebook “do[es], however, allow clear attempts at humor or satire that might otherwise be considered a possible threat or attack. This includes content that many people may find to be in bad taste.” That sounds more European.

A recent controversy involving Sarah Palin illustrates how Facebook’s system can go inadvertently wrong because of a lack of human review. During the debate over the construction of an Islamic cultural center not far from Ground Zero in Manhattan, Sarah Palin posted a note on Facebook responding to a statement by New York’s Mayor Bloomberg supporting the center. Palin’s note criticized the Islamic cleric who would lead the cultural center for what she classified as extremist views and said a project supported by such a person should not be approved. In response, some Internet commenters reported Palin’s note to Facebook as hate speech and urged others to do so. Facebook deleted the post. Facebook later released a statement apologizing and claiming that “the note in question did not violate our content standards but was removed by an automated system.” This raises an important question: how much due diligence do Internet platforms like Facebook believe is necessary to oversee their censorship mechanisms and practices?

Finally, consider Twitter, which has the most American definition of free speech. Unlike Google and Facebook, Twitter does not explicitly address hate speech, but it says in its rule book that “users are allowed to post content, including potentially inflammatory content, provided they do not violate the Twitter Terms of Service and Rules.” Those include a prohibition against “direct, specific threats of violence against others.” Twitter bans particular tweets that are illegal in individual countries but refuses to remove entire hashtags on the grounds that they might offend the dignity of a group.

By examining how hate speech is regulated in practice on the Internet, we can see that how a particular policy is enforced is just as important as the definition of hate speech itself. Even a European definition can be enforced with sensitivity to civil liberties, and a more American one can still be invoked to ban more speech than the First Amendment would allow the government to prohibit. Still, threats to suppress speech multiply from around the world: the European Union has recently proposed the creation of a “right to be forgotten” that would allow users to demand the removal not only of material they post about themselves but also of truthful but embarrassing comments posted about them by others, unless, in the judgment of a privacy commissioner, the comments serve a public interest. If Google and Facebook refused to remove the content, they could be liable for up to 2 percent of their annual revenue; with revenues of more than $50 billion, that would represent over $1 billion in potential liability. This would increase pressure on the Google and Facebook Deciders to play the role of private judges, reviewing content in advance and removing even speech that should be protected in order to avoid legal liability.

All of this suggests that the hate speech policies of the Deciders should be narrowed to look more American than their current incarnations. The definitions should be clear enough that they can be enforced by a combination of user flagging and corporate review. And they should look more like the Twitter definition, banning speech that is intended to threaten and incite violence against individuals or groups. The controversy over the anti-Mohammed video shows how hard it is to evaluate in advance whether a particular video is likely to cause violence, and the dangers of allowing a heckler’s veto to shut down speech based on speculation about the reaction it might provoke. For this reason, rather than moving toward a more European definition that allows the regulation of hate speech based on definitions of dignity that provoke widespread disagreement, it makes more sense for the Deciders to move toward an American definition that can be enforced as quickly and consistently as possible.

Jeffrey Rosen is a professor of law at The George Washington University and the legal affairs editor of The New Republic. His most recent book is The Supreme Court: The Personalities and Rivalries that Defined America.

 

