
Invisible Threats: Mitigating the Risk of Violence from Online Hate Speech against Human Rights Defenders


Social media companies (SMCs) provide a platform for human rights defenders to share information and express opinions. At the same time, these platforms are increasingly being used to target and harass human rights defenders, including journalists, environmental activists, and lawyers.

Systematic, state-aligned campaigns to denigrate and indirectly threaten human rights defenders and marginalized communities contribute to a climate of violence and impunity that increases the risk of real-world violence against defenders. The volume and visibility of such speech are particularly toxic in countries, such as Guatemala, marked by political persecution of human rights defenders and systemic, historical impunity for violence against defenders and marginalized populations.

To address these challenges, the ABA Center for Human Rights asked the International Human Rights and International Law Clinic at the University of Connecticut School of Law to examine whether the content moderation policies of SMCs are sufficient to mitigate the risk of violence from online hate speech against human rights defenders in Guatemala. This study focused on the situation in Guatemala to illustrate the need for more targeted content moderation policies and practices.

The study found that greater attention can and should be paid to three dimensions of hate speech that contribute to violence against human rights defenders in Guatemala: the contextual meaning of speech, the use of social media by state-aligned actors to engage in political persecution of activists, and the measures used to identify potentially harmful speech. In addition to the specific recommendations below, the researchers encouraged SMCs to adopt the Santa Clara Principles in order to improve accountability and transparency in their content moderation policies and practices.

Based on the study’s findings, the researchers put forth the following recommendations for social media companies, in line with international human rights and criminal law, to mitigate the risk of violence arising from content on their platforms.

Recommendation One: SMCs should, as a temporary measure, include human rights defenders as a protected category under their hate speech/harmful content policies in countries where defenders face persecution by the state or are not protected by the government from violent retaliation for their advocacy — that is, where the SMC finds that defenders in that country are the targets of systematic persecution carried out or tolerated by the government.

Recommendation Two: SMCs should establish additional review procedures that will allow them to take better account of coded threats that contribute to a climate of violence but may not constitute a direct personal threat, by (1) providing heightened scrutiny of content in problematic or sensitive countries; (2) engaging localized personnel and guidance; (3) considering the influence of the speaker in evaluating the effect of speech on their platforms; and (4) improving flagging processes to facilitate the gathering of context-specific information by

  1. utilizing verified users as endorsed content moderators;
  2. creating and implementing online and social media literacy training programs; and
  3. creating transparent appeals processes for challenging decisions to remove, or refusals to remove, flagged content.

SMCs should provide heightened scrutiny based upon a review of relevant risk factors drawn from the social science of incitement to violence, including but not limited to:

  • “a history of intergroup conflict between the in-group and out-group” or an overall increase in the “number of instances of inter-group violence” in the past twelve months;
  • “a major national political election in the next 12 months”; and
  • “significant polarization of political parties along religious, ethnic, or racial lines”.

Implemented carefully, these recommendations could help ensure that these companies deliver on their promise of providing a platform for open debate without inadvertently enabling the denigration of vulnerable populations and the silencing of their greatest defenders.

View Full Report (English) (Spanish)