Digital Reconstructions
Another major development has been the virtual reconstruction of crime scenes—and in one case, an entire city—to tell a fuller story of how a series of atrocities unfolded. In the international criminal case Prosecutor v. Al Hassan, the design/build firm SITU Research created a 3D reconstruction of Timbuktu, Mali. They embedded the visual evidence from the case—including both open- and closed-source videos and photos—into the reconstruction. Lawyers and judges could peruse the digital replica of the city and “turn a corner” to come across where a video had been shot and play that video, learning how the events were situated in physical space. Prosecutors also used satellite imagery to demonstrate that the destruction of cultural heritage property and other potential crimes in Timbuktu were systematic and widespread—critical elements for establishing that the defendant had played a role in the perpetration of crimes against humanity.
In addition to connecting disparate pieces of evidence or telling a story, organizations like SITU Research and Forensic Architecture have been exploring how reconstructions may be useful for generating testimonial evidence. For example, witnesses may be able to better communicate with lawyers, judges, and juries when they digitally “walk” them through the simulation of a crime scene and explain what they experienced at various sites; pretrial, the virtual environment may help jog memories or minimize gaps in understanding.
While expensive to construct, and at risk of raising inequality of arms and due process concerns, such demonstratives will likely become increasingly common at the international level. Reconstructions may also start appearing more frequently in domestic courts, especially in highly complex cases with an abundance of visual evidence that must be situated in geographic space.
Generative Artificial Intelligence
Like these demonstratives, large language models (LLMs) and large multimodal models (LMMs), both forms of generative artificial intelligence, are being explored for their ability to aid legal research and analysis and to craft outputs for court. These artificial intelligence-based systems, trained on enormous datasets, are increasingly used to help researchers find relevant data online, answer questions about that data, and identify helpful source materials. With a little coding ability and incredibly literal instructions, LLMs such as ChatGPT, Bard, and LLaMa can be used to source information online, organize it into datasets, compile findings into reports, and even create visuals to help illustrate how human rights violations unfolded.
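The workflow described above—pairing incredibly literal instructions with a small amount of code—can be sketched as follows. This is a minimal, hypothetical illustration: the prompt wording, field names, and stubbed model reply are assumptions for demonstration, not any organization's actual pipeline, and a real system would call an LLM API and then verify every extracted claim against its underlying source.

```python
import json

# Hypothetical sketch: the field names, prompt wording, and stubbed
# response below are illustrative assumptions, not a real system.

PROMPT_TEMPLATE = (
    "You are assisting a human rights investigation. From the report "
    "below, return a JSON object with exactly these keys: date, "
    "location, alleged_violation, sources. Use null for anything the "
    "report does not state.\n\nReport:\n{report}"
)

REQUIRED_FIELDS = {"date", "location", "alleged_violation", "sources"}


def build_prompt(report: str) -> str:
    """Assemble the literal, structured instructions sent to the model."""
    return PROMPT_TEMPLATE.format(report=report)


def parse_llm_reply(reply: str) -> dict:
    """Validate the model's JSON output before adding it to a dataset."""
    record = json.loads(reply)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"model omitted required fields: {missing}")
    return record


# In practice the reply would come from an LLM API call; here it is
# stubbed so the organizing step can be demonstrated offline.
stub_reply = json.dumps({
    "date": "2012-07-01",
    "location": "Timbuktu, Mali",
    "alleged_violation": "destruction of cultural heritage property",
    "sources": ["open-source video"],
})

record = parse_llm_reply(stub_reply)
print(record["location"])  # Timbuktu, Mali
```

Because model output is unreliable, the validation step matters as much as the prompt: records that fail schema checks are rejected rather than silently entered into the dataset, and anything that passes still requires human review.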
However, as has been widely discussed in the media, such systems are infamously prone to “hallucinations,” a romanticized term for making up sources and facts. This makes them dangerous for many potential legal uses. In May 2023, a U.S.-based lawyer grabbed global attention when he used ChatGPT to generate a legal brief that was later found to be riddled with made-up cases and otherwise inaccurate information. In a study released in January 2024, researchers found that “legal hallucinations are alarmingly prevalent” when LLMs are used by lawyers for legal research and analysis, occurring between 69 percent of the time with ChatGPT 3.5 and 88 percent of the time with LLaMa 2.
Various jurisdictions are now scrambling to develop and/or offer guidance on the appropriate use of generative artificial intelligence as an increasing number of attorneys find themselves sanctioned for using LLMs irresponsibly in their legal research, writing, and analysis. In December 2023, England’s Courts and Tribunals Judiciary issued guidance providing that artificial intelligence could be used to aid drafting but cautioned against using LLMs for legal research and analysis.
Despite these concerns, researchers and advocates are continuing to explore the many ways in which these systems can be used to automate and thus speed up the detection, information collection, and communication of various legal phenomena, including human rights violations. Such systems may be used to more quickly identify potentially relevant information online, organize that data, and lead to potential evidence.
Conclusion
Despite the power of digital sources to enable monitoring, fact-finding, and evidence-gathering, law schools have remained relatively consistent in how they teach their students. With few exceptions, such institutions seem to be paying relatively little attention to the ways in which social media, satellite imagery, and other digital open sources may be used to make sense of what is happening in the world. While students continue to mine Westlaw, Lexis, or HeinOnline for relevant sources, the use of other online information remains largely outside the purview of legal academia. Law schools are arguably lagging in their ability to embrace the digital universe of facts that are potentially at their fingertips.
Despite this, several relatively recent developments may help speed up the dissemination of these methods. For example, in 2020, UC Berkeley Law’s Human Rights Center and the United Nations Office of the High Commissioner for Human Rights released the Berkeley Protocol on Digital Open-Source Investigations. The document provides international guidance on the “effective use” of digital online information to investigate human rights violations, as well as violations of humanitarian and international criminal law. In early 2024, that document will be officially launched upon its release in all of the official languages of the United Nations. And higher education is beginning to create formal programs to disseminate relevant skills, as evidenced by offerings at Berkeley Law, UCLA, UC Santa Cruz, the University of Essex, Utrecht Academy, the University of Pretoria, and elsewhere. As part of the digital open-source investigations movement, these institutions and others are increasingly teaching their students how digital technologies can be useful not just for organizing evidence but also for gathering facts, analyzing information, and even identifying potential cases in the first place.