June 03, 2024 HUMAN RIGHTS

Technology Can Monitor Human Rights Violations and Bring Perpetrators to Justice

By Alexa Koenig

In the video, a young man, later identified as John Sanders, strolls down a sidewalk in front of the Justice Department in Cleveland, Ohio. It’s late spring 2020, and pockets of the United States are exploding in protest. Days before, a Black man, George Floyd, was murdered by white police officer Derek Chauvin in Minneapolis, and communities from coast to coast—including Cleveland—have become sites of both peaceful and violent protest. 

Digital technologies can be used to automate and speed up the detection, information collection, and communication of various legal phenomena.

A camera documents Sanders walking alone. All is calm until he suddenly bends in half, clutching his face with his hands. He’s just been hit in the eye with a cloth bag filled with lead pellets, likely shot by police who can’t be seen but are sheltering inside the Justice Department. The shooting, unprovoked, will leave Sanders partially blind. But his will not be the only blinding captured on video that week. The Washington Post will document 11 more across the country resulting from police use of “less lethal” weapons during a six-day period.

In the last 20 years, we have witnessed a radical change in the use of information and communication technologies, especially video, to capture and communicate information about human rights violations. The rise of the smartphone, expansion of social media as a repository of photo and video content, and growing accessibility of satellite imagery have increasingly helped to communicate the who, what, when, where, and how of atrocities worldwide.

Today, the law-related uses for sensor-generated data are multiple. In this article, I discuss three emerging ways in which digital technologies are being used to monitor and analyze human rights violations and bring perpetrators to justice. These include the increasing deployment of (1) digital open-source information (public-facing social media content as well as other information found online); (2) digital reconstructions of sites where human rights violations have occurred, which are proving to be powerful communication tools in court; and (3) generative artificial intelligence, which is beginning to be used for everything from identifying relevant online information to analysis and reporting.

Digital Open-Source Information

Today, sensors are ubiquitous. From the smartphones in people’s purses and pockets to security cameras mounted on buildings to satellites circling overhead, cameras are constantly generating data about human activities, including information relevant to human rights abuses. Many of these sensor-derived data points may come to have social or legal relevance, like the cameras that captured the blinding of John Sanders.

Decades ago, having visual information related to an event often depended on a reporter being present; today, at least in some communities, just about every citizen carries a smartphone. When abuses break out, there may be multiple videos of a single event, allowing researchers to triangulate across sources. Once, lawyers might have been fortunate to have a single photo or video of an event; today, the challenge may be how to parse an overwhelming volume of potentially relevant data.
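The triangulation described above can be sketched in code. The following is a minimal, hypothetical illustration (not a real investigative tool): given a collection of media items tagged with capture time and location, it finds footage from independent sources that plausibly documents the same event. The `MediaItem` structure, field names, and thresholds are assumptions for the sake of the sketch.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class MediaItem:
    source: str       # uploader or platform handle (hypothetical field)
    timestamp: float  # Unix time the footage was captured
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def corroborating_items(items, anchor, max_minutes=30, max_km=0.5):
    """Return items from *other* sources captured within a time window and
    distance radius of the anchor item -- a crude proxy for independent
    footage of the same event, which a human analyst would then review."""
    return [
        it for it in items
        if it.source != anchor.source
        and abs(it.timestamp - anchor.timestamp) <= max_minutes * 60
        and haversine_km(it.lat, it.lon, anchor.lat, anchor.lon) <= max_km
    ]
```

In practice, open-source investigators do this kind of cross-referencing with far richer signals (shadows, landmarks, audio), but the time-and-place filter above captures the basic idea of narrowing a flood of footage to candidate corroborations.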

Such sensors are not only helpful for documenting abuses; they can also be useful for monitoring rapidly unfolding events. During the 2020 U.S. presidential election, open-source researchers used online information to scan for outbreaks of violence and track protests. Sources included not only videos, photos, and text posted to an array of social media sites but also police scanners and government updates. Human monitors could share their findings with reporters who could get on the ground, turning their attention and resources toward potentially problematic hot spots.

Such digital content can, of course, also become helpful downstream as evidence for courts. In 2017, a video posted to Facebook showed a leader of the Al-Saiqa brigade in Libya participating in the extrajudicial killing of multiple people. That video, and several posted alongside it, became critical information to support an International Criminal Court arrest warrant. The videos and photographs pulled from online public spaces can serve as crucial complements to the “closed source” photos and videos shared directly with legal teams and/or captured by investigators for court-related purposes. 

Digital Reconstructions

Another major development has been the virtual reconstruction of crime scenes—and in one case, an entire city—to tell a fuller story of how a series of atrocities unfolded. In the international criminal case Prosecutor v. Al Hassan, the design/build firm SITU Research created a 3D reconstruction of Timbuktu, Mali. They embedded the visual evidence from the case—including both open- and closed-source videos and photos—into the reconstruction. Lawyers and judges could peruse the digital replica of the city and “turn a corner” to come across where a video had been shot and play that video, learning how the events were situated in physical space. Prosecutors also used satellite imagery to demonstrate that the destruction of cultural heritage property and other potential crimes in Timbuktu was systematic and widespread—critical elements for establishing that the defendant had played a role in the perpetration of crimes against humanity.

In addition to connecting disparate pieces of evidence or telling a story, organizations like SITU Research and Forensic Architecture have been exploring how reconstructions may be useful for generating testimonial evidence. For example, witnesses may be able to better communicate with lawyers, judges, and juries when they digitally “walk” them through the simulation of a crime scene and explain what they experienced at various sites; pretrial, the virtual environment may help jog memories or minimize gaps in understanding.

While expensive to construct, and at risk of raising equality-of-arms and due process concerns, such demonstratives will likely become increasingly common at the international level. Reconstructions may also start appearing more frequently in domestic courts, especially in highly complex cases with an abundance of visual evidence that must be situated in geographic space.

Generative Artificial Intelligence

Like these demonstratives, large language models (LLMs) and large multimodal models (LMMs), both forms of generative artificial intelligence, are being explored for their ability to aid legal research and analysis and to craft outputs for court. LLMs and LMMs are artificial intelligence-based systems trained on enormous datasets that are increasingly being used to help researchers find relevant data online, answer questions about that data, and identify helpful source materials. With a little coding ability and incredibly literal instructions, LLMs such as ChatGPT, Bard, and Llama can be used to source information online, organize it into datasets, compile findings into reports, and even create visuals to help illustrate how human rights violations unfolded.
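A minimal sketch of the sourcing-and-organizing workflow described above might look as follows. The function and field names are hypothetical, and the model call is a stub: a real pipeline would send each post to a hosted LLM with a carefully worded prompt and parse the response, whereas here a keyword heuristic stands in so the example is self-contained.

```python
def classify_with_llm(post: str) -> bool:
    """Stand-in for a call to a hosted LLM. A real pipeline would submit
    *post* with an instruction like 'Does this describe a possible human
    rights violation?' and parse the model's answer; this keyword
    heuristic is a placeholder only."""
    keywords = ("protest", "detained", "tear gas", "shooting")
    return any(k in post.lower() for k in keywords)

def build_dataset(posts):
    """Organize raw posts into a simple structured dataset, flagging
    potentially relevant items for human review."""
    return [
        {"id": i, "text": p, "flagged": classify_with_llm(p)}
        for i, p in enumerate(posts)
    ]
```

Whatever the model, the flagged output is only a starting point: given the reliability problems discussed below, every machine-flagged item would need verification by a human investigator before it could inform reporting or litigation.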

However, as has been increasingly discussed in the media, such systems are infamously prone to “hallucinations,” a romanticized term for making up sources and facts. This makes them dangerous for many potential legal uses. In May 2023, one U.S.-based lawyer grabbed global attention when he used ChatGPT to generate a legal brief that was later found to be riddled with made-up cases and otherwise inaccurate information. In a study released in January 2024, researchers found that “legal hallucinations are alarmingly prevalent” when LLMs are used by lawyers for legal research and analysis, occurring between 69 percent of the time with ChatGPT 3.5 and 88 percent of the time with Llama 2.

Various jurisdictions are now scrambling to develop and/or offer guidance on the appropriate use of generative artificial intelligence as an increasing number of attorneys find themselves sanctioned for using LLMs irresponsibly in their legal research, writing, and analysis. In December 2023, England’s Courts and Tribunals Judiciary issued a determination that lawyers could use artificial intelligence to aid drafting but cautioned against using LLMs for legal research and analysis.

Despite these concerns, researchers and advocates are continuing to explore the many ways in which these systems can be used to automate, and thus speed up, the detection, information collection, and communication of various legal phenomena, including human rights violations. Such systems may be used to more quickly identify potentially relevant information online, organize that data, and surface potential evidence.


Despite the power of digital sources to enable monitoring, fact-finding, and evidence-gathering, law schools have changed relatively little in how they teach law students. With few exceptions, such institutions pay relatively little attention to the ways in which social media, satellite imagery, and other digital open sources may be used to make sense of what is happening in the world. While students continue to mine Westlaw, Lexis, or HeinOnline for relevant sources, the use of other online information remains largely outside the purview of legal academia. Law schools are arguably lagging in their ability to embrace the digital universe of facts potentially at their fingertips.

Despite this, several relatively recent developments may help speed up the dissemination of methods. For example, in 2020, UC Berkeley Law’s Human Rights Center and the United Nations Office of the High Commissioner for Human Rights released the Berkeley Protocol on Digital Open-Source Investigations. The document provides international guidance on the “effective use” of digital online information to investigate human rights violations, as well as violations of humanitarian and international criminal law. In early 2024, that document will be officially launched with its release in all of the official languages of the United Nations. And higher education is beginning to create formal programs to disseminate relevant skills, as evidenced by initiatives at Berkeley Law, UCLA, UC Santa Cruz, the University of Essex, Utrecht Academy, the University of Pretoria, and elsewhere. As part of the digital open-source investigations movement, these institutions and others are increasingly teaching their students how digital technologies can be useful not just for organizing evidence but also for gathering facts, analyzing information, and even identifying potential cases in the first place.

Alexa Koenig, JD, MA, PhD

Co-Faculty Director, UC Berkeley Law’s Human Rights Center

Alexa Koenig, JD, MA, PhD, is co-faculty director of UC Berkeley Law’s Human Rights Center, a research professor at Berkeley Law, and a lecturer at Berkeley Journalism’s Investigative Reporting Program.