
Just Resolutions

November 2023 - Public Disputes & Consensus Building Committee

A Path to Social Justice with AI

Kassi Richey Burns

Summary

  • ChatGPT and other Generative Pretrained Transformer (GPT) tools raise ethical concerns related to algorithmic bias, data privacy, automation bias, and other issues.
  • INTERPOL and the United Nations Interregional Crime and Justice Research Institute (UNICRI) published a Toolkit for Responsible AI Innovation in Law Enforcement, which includes five core principles as its foundation.
  • Over the years, the U.S. Government Accountability Office has released reports assessing the use of facial recognition tools.
  • SIID Technologies integrates AI tools to promote fairness and equality in law enforcement and legal communities.

Artificial Intelligence (“AI”) tools have been used in both the investigation and prosecution stages of criminal justice for some time now. During the past year, the topic of AI has faced increased attention and scrutiny from the public and regulators, resulting in requirements that these tools be used in an ethical way that protects core human rights. The flip side is that we can look to how AI can be used for good: as these tools become increasingly robust and accessible, they create an opportunity for marginalized communities to leverage them in the protection of their own rights.

Investigation: Retraining AI for Social Justice

The Coming AI Regulations

The launch of OpenAI’s ChatGPT 3.5 in November 2022 captured the attention of the world. A chatbot built on a Large Language Model and leveraging Natural Language Processing to take prompts from users and create new content was a transformative step forward in our engagement with AI tools. Elevated beyond the usual tasks of categorization and classification, ChatGPT and other GPT (Generative Pretrained Transformer) tools generate content that could be mistaken for human-created content. This, in combination with other issues that have already been raised about the ethical use of AI tools, such as algorithmic bias, lack of data diversity, lack of validation, data privacy concerns, and automation bias, prompted swift attention from various governing bodies.

Perhaps the first among those governing bodies to act was the European Union (“EU”). In June 2023, Members of the European Parliament adopted their position on an EU AI Act, with the goal that the final form of the law will be negotiated and agreed upon by the Council by year’s end. While Generative AI largely acted as the catalyst for regulatory scrutiny, most proposals, if not all, take a broader approach to what should be regulated. Not only is Generative AI being addressed, but other forms of AI that have been in use for some time have also captured the attention of lawmakers. For example, AI systems considered “high risk” by the EU AI Act (requiring registration with the EU and ongoing assessment obligations) include those created for law enforcement, given the potential impact on human rights.

INTERPOL’s AI Toolkit

Similar to the EU AI Act, INTERPOL has focused its attention on the responsible use of AI by law enforcement for a number of years. In 2018, INTERPOL began its collaboration with UNICRI (United Nations Interregional Crime and Justice Research Institute) with their inaugural Global Meeting on AI for Law Enforcement, resulting in their 2019 report, AI for Law Enforcement. In June 2023, INTERPOL and UNICRI published their Toolkit for Responsible AI Innovation in Law Enforcement (“AI Toolkit”), which acknowledges the benefits law enforcement can gain by using AI tools and how that use must be balanced against risk assessments addressing human rights and the ethical impact of that use. The AI Toolkit rests on five core “Principles for Responsible AI Innovation”: (1) Lawfulness, (2) Minimization of Harm, (3) Human Autonomy, (4) Fairness, and (5) Good Governance. It also includes a risk assessment questionnaire that law enforcement agencies can use to evaluate their AI systems.

U.S. Law Enforcement: Retraining AI Models

AI-based tools, including facial recognition software, have been used by law enforcement for some time. Over the years, the U.S. Government Accountability Office (“GAO”) has released several reports on the use of facial recognition tools, including a 2020 study that highlighted privacy risks and bias issues stemming from poor accuracy when analyzing certain demographics. In its most recent report, published in September 2023, the GAO shared its assessment of seven U.S. law enforcement agencies, their use of facial recognition AI software, and the policies and training they had in place to safeguard civil rights and civil liberties. Of those seven agencies, only two had instituted training programs on the proper use of AI tools, prompting the GAO to recommend training on the use of facial recognition software for the remaining agencies.

Prosecution: Challenges to Social Justice

Accessibility to Resources

A bedrock of the U.S. criminal justice system is the defendant’s right to counsel. Initially established for federal criminal defendants through the Sixth Amendment, that right was extended to state criminal defendants in Gideon v. Wainwright through the application of the Fourteenth Amendment’s right to due process. Gideon v. Wainwright, 372 U.S. 335 (1963). Over the years, public defenders’ offices at both the federal and state levels have been increasingly challenged by diminishing resources and growing demand. Existing guidelines for public defense attorneys recommend the amount of time they should spend on their assigned matters: 13.9 hours for felony cases and 5.2 hours for misdemeanor cases. Even with recently recommended updates that increase those hours to account for expanding data sources of relevant evidence, most public defense counsel are still left balancing an overly demanding caseload.
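To make those benchmarks concrete, a rough back-of-the-envelope calculation (a sketch only, assuming a standard 2,080-hour work year with no time carved out for training, administration, or leave) shows the caseload ceilings they imply:

```python
# Illustrative calculation of the caseload ceilings implied by the
# per-case hour benchmarks quoted above. The 2,080-hour work year is
# an assumption; real schedules include non-casework obligations.
HOURS_PER_YEAR = 2080
FELONY_HOURS = 13.9
MISDEMEANOR_HOURS = 5.2

print(f"Max felony cases/year:      {HOURS_PER_YEAR / FELONY_HOURS:.0f}")
print(f"Max misdemeanor cases/year: {HOURS_PER_YEAR / MISDEMEANOR_HOURS:.0f}")
# Roughly 150 felonies or 400 misdemeanors, with zero slack for anything
# else -- caseloads above these ceilings leave less time per case than
# the guidelines recommend.
```

Even under these generous assumptions, the ceilings are quickly reached by the dockets many public defenders actually carry.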

Emergence of Accessible AI Tools

Erie O’Diah founded her legal tech startup SIID (standing for social impact identification) as a way for law enforcement and legal communities to use AI while also keeping racial equality and fairness in mind. Originally meant to be software for marketing professionals to identify instances of unconscious bias in their content, SIID pivoted toward the legal world after O’Diah was moved by the events surrounding George Floyd’s murder in Minneapolis. Her hope is that SIID can be a resource for law enforcement, prosecutors, and public defenders alike to help manage stressed criminal dockets that have become increasingly backlogged since COVID.

SIID integrates various AI tools, including natural language processing (NLP) and sentiment analysis, to quickly analyze evidence and highlight key elements that can be prioritized in review. No complex processing is required to use the software: a user simply uploads a media file to SIID for analysis by its AI tools. The ability to efficiently upload and analyze evidence, particularly video evidence such as bodycam footage with potentially long run times, can mean the difference between a plea deal and an acquittal for some criminal defendants. Such a tool can give public defenders the opportunity to identify clients with exculpatory evidence early in a case.
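As a rough illustration of the kind of sentiment-based triage described above (a minimal sketch, not SIID’s actual implementation), the snippet below uses NLTK’s off-the-shelf VADER sentiment analyzer to flag strongly negative transcript segments so a reviewer can jump to them first. The segments, timestamps, and the -0.5 threshold are all assumed for illustration:

```python
# Minimal sketch of sentiment-based evidence triage. Illustrative only;
# not SIID's actual pipeline. Assumes a speech-to-text step has already
# produced timestamped transcript segments (hypothetical data below).
# Setup: pip install nltk, then run nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

# Hypothetical bodycam transcript segments: (seconds into video, text).
segments = [
    (120, "Everything is calm, we're just talking."),
    (305, "Stop resisting! Get on the ground now!"),
    (610, "He complied immediately and was cooperative."),
]

sia = SentimentIntensityAnalyzer()

# VADER's "compound" score ranges from -1 (most negative) to +1 (most
# positive). The -0.5 cutoff is an assumed "flag for review" threshold.
flagged = []
for ts, text in segments:
    score = sia.polarity_scores(text)["compound"]
    if score <= -0.5:
        flagged.append((ts, text, score))

# Surface the most negative moments first, so a reviewer can jump straight
# to potentially significant portions of long video footage.
for ts, text, score in sorted(flagged, key=lambda item: item[2]):
    print(f"{ts // 60:02d}:{ts % 60:02d}  compound={score:+.2f}  {text}")
```

In a real workflow, the transcript would come from a speech-to-text pass over the uploaded media, and sentiment would likely be only one of several signals used to prioritize human review.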

Conclusion

Much like the scales of justice, it is vitally important that any AI-based tool be implemented in a balanced and fair manner. Given the potential for algorithmic bias and automation bias, and the scale at which these tools operate, any use of AI tools by law enforcement without proper validation and training can result in harm to human rights. That concern is balanced by the power that increasingly affordable AI tools for evidence analysis can offer overburdened public defenders and marginalized communities. Both sides should have sufficient access to these tools, and guidelines on their ethical use, to better balance those scales for social justice.
