
GPSolo Magazine

GPSolo March/April 2025 (42:2): AI for Lawyers

AI’s Complex Role in Criminal Law: Data, Discretion, and Due Process

Kimberly Russell

Summary

  • Mass surveillance and predictive policing powered by artificial intelligence (AI) test the Fourth Amendment’s protection against unreasonable searches.
  • Opaque, AI-driven risk assessments threaten the 14th Amendment’s guarantees of equal protection and due process.
  • While AI presents serious challenges, it also holds promise for leveling the playing field for defendants, improving public safety, and increasing access to justice.
  • Effective oversight should mandate algorithmic transparency, regular independent audits, and the incorporation of human judgment to mitigate biases.

In today’s digital era, artificial intelligence (AI) is transforming every facet of society—including the criminal justice system. AI promises greater efficiency in investigations and judicial proceedings by automating tasks and analyzing vast amounts of data. However, its unchecked use also raises serious concerns about due process rights and constitutional safeguards.

This article provides an overview of what AI is and how it is used in criminal law, examines the risks and failures associated with its deployment, and highlights the promise of improved public safety and increased access to justice. Finally, we discuss the urgent need for robust oversight—both domestically and internationally—to ensure that AI does not erode the fundamental rights guaranteed by the Constitution.

What Is AI and How Is It Generally Used in Criminal Law?

“Can machines think?” English mathematician Alan Turing posed this question in his seminal 1950 paper, “Computing Machinery and Intelligence.” In 1956, just six years after Turing’s paper, a group of scientists, mathematicians, and engineers convened at Dartmouth College, coined the term “artificial intelligence,” and laid the foundation for the entire field. Today, AI encompasses all technology and software that enable computers to simulate human intelligence—learning, comprehension, problem-solving, and decision-making.

The evolution from Turing’s theoretical question to modern systems such as ChatGPT spans more than 70 years. The initial discussions in the 1950s led to the development of machine learning in the 1980s, which in turn paved the way for deep learning and, more recently, generative AI (GenAI). Although headlines frequently focus on GenAI, every iteration plays a crucial role in modern technology.

At its core, AI is akin to legal analysis. It centers on two essential components: data and algorithms. For simplicity, think of “data” as the facts in a legal brief and “algorithm” as the law. An algorithm is a set of rules the computer follows, while data constitutes the information that the computer analyzes under those rules. Together, these components allow AI systems to analyze information and generate outputs, much like a lawyer or judge interprets facts by applying the law.
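To make the analogy concrete, consider a minimal sketch in Python (all names, rules, and thresholds here are invented for illustration): the function plays the role of the algorithm, a fixed set of rules, and the dictionary plays the role of the data, the facts those rules are applied to.

```python
# The "algorithm": a fixed set of rules the computer follows mechanically,
# much as a statute is applied to the facts of a case. Hypothetical example.

def classify_speeding(facts: dict) -> str:
    limit = facts["speed_limit_mph"]
    speed = facts["measured_speed_mph"]
    if speed > limit + 20:
        return "reckless driving"
    if speed > limit:
        return "infraction"
    return "no violation"

# The "data": the facts, like those recited in a brief.
case_facts = {"speed_limit_mph": 35, "measured_speed_mph": 98}

print(classify_speeding(case_facts))  # -> "reckless driving"
```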

AI is now increasingly applied to every field of legal practice, including the criminal justice system, where the technology is employed from investigation through adjudication. Current trends in criminal law include:

  • AI-powered mass surveillance and facial recognition systems used to identify and arrest suspects;
  • predictive policing algorithms that forecast where crime is likely to occur; and
  • risk assessment tools that inform bail, sentencing, and parole decisions.

These applications of AI pose substantial threats to our due process rights. AI-powered mass surveillance and predictive policing test the Fourth Amendment’s protection against unreasonable searches, while opaque risk assessments threaten the 14th Amendment’s guarantees of equal protection and due process.

The Risks and Failures of AI in the Criminal Justice System

Although AI promises improved efficiency and consistency in criminal justice, it raises due process concerns, most notably bias in data-driven decision-making and errors resulting from flawed surveillance technologies.

Bias in Data and Outcomes: Risk Assessments and Predictive Policing

Risk assessment tools such as COMPAS are widely used to inform decisions regarding bail, sentencing, and parole. Touted as a “nationally validated assessment” by equivant Solutions, COMPAS is meant to support evidence-based decision-making in criminal justice. However, a landmark 2016 study by ProPublica revealed that COMPAS disproportionately misclassified Black defendants who did not reoffend as high risk, while white defendants were more frequently labeled low risk despite higher recidivism rates.
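The disparity ProPublica described can be stated precisely as a gap in false positive rates: the share of defendants who did not reoffend but were nonetheless labeled high risk, computed separately for each group. A short sketch of that calculation follows; the records below are fabricated purely to show the metric and are not COMPAS data.

```python
# Compute the false positive rate (non-reoffenders labeled high risk) per
# group. The tiny dataset below is invented for illustration only.

records = [
    # (group, labeled_high_risk, reoffended)
    ("Group A", True,  False),
    ("Group A", True,  False),
    ("Group A", True,  True),
    ("Group A", False, False),
    ("Group B", True,  True),
    ("Group B", False, False),
    ("Group B", False, True),
    ("Group B", False, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were labeled high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("Group A", "Group B"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: FPR = {false_positive_rate(rows):.0%}")
# Group A: FPR = 67%  <- flagged far more often despite not reoffending
# Group B: FPR = 0%
```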

Critics such as the NAACP argue that using historical criminal data for predictive policing embeds preexisting racial biases into AI systems. Data shows that the Black community is disproportionately impacted by over-policing and discriminatory laws. The Vera Institute of Justice reports that Black men constitute about 13 percent of the male population but account for roughly 35 percent of the incarcerated population, with one in three Black men born today expected to face incarceration in his lifetime. Such disparities have led the American Bar Association to advocate for reforms that eliminate racial and ethnic bias in the justice system.

Notable Cases of AI Gone Wrong: Facial Recognition and Mass Surveillance

Beyond risk assessments, AI’s role in mass surveillance has also produced significant failures and concerns. In January 2025, The Washington Post reported that 15 police departments across 12 states were using facial recognition systems to make arrests without direct evidence linking suspects to crimes. In at least eight cases, this technology resulted in wrongful arrests, highlighting severe inaccuracies and privacy violations.

Facial recognition technology, a key element of AI-driven mass surveillance, is widely used by federal and state law enforcement agencies. For example, the Transportation Security Administration (TSA) employs facial recognition at airport checkpoints to verify traveler identities, deleting images immediately after verification. Yet, the widespread adoption of such systems raises concerns about data retention and potential misuse.

Another controversial example is Clearview AI, a private company that amassed billions of public images by scraping them from the Internet and social media. Clearview AI’s technology is banned in Canada and heavily regulated elsewhere. In 2022, the company settled a class action lawsuit in the United States and agreed to limit the sale of its facial recognition database to government agencies and financial institutions. Despite these restrictions, more than 3,100 law enforcement agencies—including the FBI and Department of Homeland Security—continue to rely on Clearview AI’s technology.

These issues—bias in risk assessments and failures in surveillance—underscore the potential for AI to undermine constitutional protections and exacerbate systemic inequalities, posing a threat to fairness and due process in criminal justice.

The Promise of AI and Improvements in the Criminal Justice System

While AI presents serious challenges, it also holds promise for leveling the playing field for defendants, improving public safety, and increasing access to justice.

AI Is Leveling the Playing Field for Defendants

The very same technology that police departments use to arrest suspects can and should be used in criminal defense to level the playing field. Take the story of Andrew Grantt Conlyn, a man wrongfully charged with vehicular homicide. Conlyn faced 15 years in prison because police did not believe he was the passenger in a horrific crash. In March 2017, Conlyn climbed into his friend’s 1997 Ford Mustang. The friend was drunk, distraught, and pushing 100 miles per hour on a road with a 35-mile-per-hour speed limit. He hit a curb and lost control of the car, colliding with a light pole and three palm trees. The driver was ejected and died at the scene. A Good Samaritan pulled Conlyn from the passenger seat of the burning Mustang. The unnamed stranger told police that Conlyn was not driving, and that conversation was captured on an officer’s body-worn camera. Despite this, Conlyn was charged with vehicular homicide because investigators did not believe the passenger had survived the crash.

Conlyn’s legal team spent years trying to identify the man from the grainy body-worn camera footage. It wasn’t until they reached out to Clearview AI that they succeeded. Because Conlyn’s lawyers were contract public defenders, and thus eligible to use Clearview AI’s government-restricted technology, they were able to locate the witness and clear Conlyn of the charges.

AI Is Improving Public Safety

AI has become instrumental in combating online criminal activities such as child pornography and human trafficking. Advanced algorithms scan digital content across social media and online platforms to detect and flag illicit material.

Companies such as Facebook and Apple have developed sophisticated tools that analyze patterns and metadata to identify potential cases of child exploitation and trafficking. Facebook collaborates closely with law enforcement, while Apple has built on-device scanning technology for its iOS systems aimed at detecting child sexual abuse material (CSAM) without compromising user privacy, as detailed on its Child Safety page. Additionally, federal and state agencies use AI-powered tools to dismantle online criminal networks by rapidly sifting through millions of images and videos. These advancements enhance public safety by enabling more precise and timely investigations.
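At a technical level, much of this detection rests on fingerprint matching: a file is reduced to a compact hash and compared against a database of hashes of known illegal material, so the material itself never has to be redistributed to reviewers. The sketch below illustrates the idea only; real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the cryptographic hash here is a crude stand-in, and the database entries are placeholders.

```python
# Hash-matching sketch: flag a file if its fingerprint matches a database
# of fingerprints of known illicit material. SHA-256 is a stand-in for the
# perceptual hashes production systems use; the entries below are fake.

import hashlib

known_fingerprints = {
    "0000placeholder-fingerprint-1",
    "0000placeholder-fingerprint-2",
}

def fingerprint(data: bytes) -> str:
    """Reduce file contents to a fixed-size fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_flag_for_review(file_bytes: bytes) -> bool:
    """True if the file matches known material; a human then reviews it."""
    return fingerprint(file_bytes) in known_fingerprints

print(should_flag_for_review(b"uploaded file bytes"))  # -> False
```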

AI Is Improving Access to Justice

AI is also expanding access to justice by making legal services more affordable and efficient. Overburdened public defenders and prosecutors now use AI-driven tools to streamline case preparation and legal research. GenAI platforms can draft documents, summarize case law, and assist in building legal arguments—allowing attorneys to focus on substantive client advocacy.
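As one hedged illustration of that workflow, the sketch below asks a generative model to summarize an opinion. The openai client is just one such API, the model name is illustrative, and, as with any GenAI output, the attorney must verify the summary against the source before relying on it.

```python
# Illustrative GenAI-assisted research: summarize an opinion's holding and
# reasoning. Requires the openai package and an OPENAI_API_KEY in the
# environment; the model name is an assumption, not a recommendation.

from openai import OpenAI

client = OpenAI()

opinion_text = "(full text of the opinion would be pasted here)"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a legal research assistant. Summarize the holding, "
                "key facts, and reasoning of this opinion in under 200 words."
            ),
        },
        {"role": "user", "content": opinion_text},
    ],
)

print(response.choices[0].message.content)
```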

By automating routine tasks, AI reduces overhead and potentially lowers the cost of private legal services, making defense against criminal allegations more accessible. The ABA’s articles “Law Bots: How AI Is Reshaping the Legal Profession” from Business Law Today and “When Legal Tech Comes of Age” from Law Technology Today highlight how AI is transforming legal practice. The ABA’s Judges Journal article “Artificial Intelligence Stepping into Our Courts: Scientific Reliability Gatekeeping of Risk Assessments” further underscores that when implemented responsibly, AI can make the justice system more efficient and equitable.

Protecting Due Process Rights from AI Through Oversight

The rapid integration of AI in criminal justice presents significant constitutional risks that require robust oversight to safeguard due process rights. Consider Atlanta, the most surveilled city in the United States: with 124 cameras per 1,000 people, it ranks among the most surveilled cities in the world. Tens of thousands of cameras from both government and private sources feed programs such as Atlanta’s Operation Shield. Citizens are even encouraged to register home and business cameras with the police, ensuring that footage is readily available for investigations.

Pervasive mass surveillance challenges the Fourth Amendment’s protection against unreasonable searches and the 14th Amendment’s guarantee of equal protection and due process. This constitutional battle is unfolding against the backdrop of the global race for AI dominance.

China has been aggressively positioning itself as the world leader in AI since 2023, supported by significant state investments and policy initiatives. In 2025 alone, Silicon Valley–based tech giants are poised to invest around $300 billion in AI infrastructure. The billionaires behind these companies wield unprecedented influence over American government policies and stand to gain immense financial rewards from widespread AI deployment, including mass surveillance.

This competition resembles the nuclear arms race after World War II—a time when technological breakthroughs reshaped global security paradigms. However, the drive for AI supremacy can undermine due process rights if not adequately regulated.

Current Legal Challenges and Proposed Policies Around AI in the United States

Currently, oversight of AI in criminal investigations in the United States is fragmented. Neither the Department of Justice nor the Administrative Office of the Courts has established comprehensive policies governing AI’s use to protect due process rights. Instead, a patchwork of guidelines and local regulations exists, resulting in inconsistent protection of constitutional rights when AI tools—such as facial recognition systems and predictive policing algorithms—are deployed. This regulatory vacuum compels policymakers and judicial authorities to urgently update laws and introduce robust oversight measures that address AI’s rapid evolution while safeguarding fundamental protections.

Examples of Effective AI Oversight to Consider

International models offer promising frameworks for effective oversight. The EU AI Act employs a risk-based framework that classifies AI applications into four tiers, mandating stringent transparency and data security standards for high-risk systems. Similarly, Canada has implemented regulations for facial recognition technology that emphasize robust privacy protections and clear accountability measures.
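The Act’s tiered structure is straightforward to express in code. The sketch below uses the Act’s four tier names but pairs them with simplified, illustrative obligations and example systems; it is a conceptual outline, not a statement of the Act’s actual legal requirements.

```python
# Risk-based classification in the spirit of the EU AI Act: four tiers,
# with obligations scaling to risk. Mappings are simplified illustrations.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "transparency, documentation, audits, human oversight"
    LIMITED = "disclosure duties (users must know they face an AI system)"
    MINIMAL = "no specific obligations"

examples = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "recidivism risk assessment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```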

Effective oversight in the United States could include several key strategies:

  • Regulating AI for fairness and transparency. Mandate that algorithms be auditable and that decision-making processes are clearly documented (a sketch of such an audit record follows this list).
  • Human oversight and hybrid models. Incorporate human judgment alongside AI outputs, especially in high-stakes decisions such as risk assessments and predictive policing.
  • Bias mitigation strategies. Implement regular, independent validation studies and audits to identify and address biases within AI systems.
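A minimal sketch of how these three strategies could fit together in practice appears below: every AI-assisted decision is logged with the model version, the inputs it saw, its raw output, and the human reviewer’s possibly different final decision, creating the paper trail an independent audit would need. All field names and values are hypothetical, not drawn from any deployed system.

```python
# Hypothetical audit record combining algorithmic transparency, human
# oversight, and auditability. Not modeled on any real system.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One logged AI-assisted decision, retained for independent audit."""
    case_id: str
    model_name: str
    model_version: str            # pin the exact model for later audits
    inputs: dict                  # the features the model actually saw
    model_output: float           # raw risk score
    human_reviewer: str           # oversight: a person signs off
    human_decision: str           # may differ from the model's suggestion
    rationale: str                # documented reasoning, not just a score
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = DecisionRecord(
    case_id="2025-CR-0042",
    model_name="risk-model",
    model_version="3.1.4",
    inputs={"age": 29, "prior_felonies": 0},
    model_output=0.18,
    human_reviewer="J. Example",
    human_decision="release on recognizance",
    rationale="Low score consistent with the record; no override needed.",
)
print(record)
```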

These measures are critical for ensuring that AI’s integration into criminal justice does not erode due process rights or reinforce systemic inequalities.

The Need for Regulatory Measures

Protecting due process rights in the age of AI demands a balanced approach to oversight that reconciles rapid technological innovation with constitutional safeguards. Mass surveillance combined with the global race for AI dominance exposes a significant regulatory gap in the United States. By drawing lessons from international frameworks—such as the EU AI Act’s risk-based approach and Canada’s privacy protections—and implementing clear, cohesive policies, policymakers can harness AI’s promise in criminal justice without sacrificing fundamental rights.

Effective oversight should mandate algorithmic transparency, regular independent audits, and the incorporation of human judgment to mitigate biases. Rigorous bias mitigation strategies are essential to prevent the reinforcement of systemic inequalities. Only with comprehensive and adaptive regulatory measures can we ensure that AI enhances public safety and access to justice while fully preserving the due process rights guaranteed by the Constitution.
