
Criminal Justice Magazine

Winter 2025

Inhuman Reason: Predictive Policing Algorithms and the Fourth Amendment

Dominic Weiss

Summary

  • Several law enforcement agencies have used or are currently using artificial intelligence to process data on how crime varies geographically, looking at trends to inform where they police.
  • Beyond influencing where officers patrol, predictive policing algorithms can influence who officers ultimately search when the output of these algorithms is used as part of a probable cause analysis or other justification for conducting a search.
  • AI output is difficult to analogize to existing justifications for conducting searches, creating opacity around the constitutionality of its use, an issue that is exacerbated when AI is used with minimal oversight or to inform emergency decisions.
  • Increased access to the datasets that predictive policing algorithms use, coupled with specific policies ensuring appropriate use of AI output, can help to ensure that predictive policing algorithms do not take on an outsized importance and that human reason maintains its place in deciding which searches will be conducted.

Artificial intelligence (AI) can harness the power of machine learning in ways that aid people in their daily tasks. This leads to gains in efficiency, as computers are able to perform these tasks in a fraction of the time that humans can. AI also has been touted as a way to improve accuracy, as machine learning can detect patterns in huge sets of data where the correlation would not be evident to even the most skilled statistician. However, these improvements come at a cost. By eliminating the human element from this work, we lose the benefit of human judgment. This is most salient with respect to reasoned explanations: For the vast majority of AI applications, both the data that feed them and the systems that run them are proprietary, meaning the reasoning behind their decisions is “blackboxed.” Further, while data scientists can explain how the models were trained, even these scientists cannot explain exactly how the AI is “reasoning,” and thus why it generates a certain outcome in any given situation.

Nevertheless, AI has been used in a wide array of law enforcement tasks. AI can be used in active crime scenes to detect car models, read license plates, and identify suspects. AI also can be used to parse evidence, matching fingerprints or DNA to a database. In such cases, the results the AI generates are easily verified by human experts: Either the system identified an input correctly or it did not. AI can additionally be used to help write incident reports and other statements, and it is regularly used in the recruitment and hiring process for officers, detectives, and other law enforcement officials. While these applications may pose their own ethical problems, their work product can still be easily double-checked and rejected if it is unreasonable. However, one application that is not so easily reviewed, because of its reliance on soft factors and the discretion it informs, is the use of predictive algorithms in policing.

Predictive Policing Algorithms

Predictive policing algorithms are fed historical crime data, which they then use to make predictions about what crimes are likely to happen, in what areas, and when. These algorithms are not limited to research contexts; they are actively being used or have been used in most states. Will Douglas Heaven, Predictive Policing Algorithms Are Racist, MIT Tech. Rev. (July 17, 2020). This includes some of the biggest police departments around the country, which have turned to AI in part as a response to high crime rates and understaffing. For example, the Los Angeles Police Department has piloted LASER to identify areas at higher risk of experiencing gun violence and PredPol to identify areas at higher risk of property-related crimes. Tim Lau, Predictive Policing Explained, Brennan Ctr. for Just. (Apr. 1, 2020). Additionally, predictive policing algorithms can give more granular recommendations than what areas should be policed more heavily, down to person-by-person recommendations on who is more likely to commit crimes or be victimized, such as the Chicago Police Department’s “heat list” of those at higher risk of being involved with gun violence. Id. Predictive policing thus increases surveillance not just on a community level, but on an individual level.
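The inner workings of commercial systems like PredPol and LASER are proprietary, but the general idea of place-based prediction can be illustrated with a minimal, purely hypothetical sketch: rank map grid cells by a recency-weighted count of the incidents previously recorded there. The field names, half-life, and numbers below are invented for illustration and are not drawn from any actual vendor’s model.

```python
# Illustrative sketch only: a toy hotspot-style scorer, not the method used
# by PredPol, LASER, or any other real system. Field names and weights are
# hypothetical.
from collections import defaultdict

def score_cells(incidents, halflife_days=30.0):
    """Rank map grid cells by a recency-weighted count of past incidents.

    incidents: iterable of dicts like {"cell_id": "A7", "days_ago": 12}
    Returns (cell_id, score) pairs sorted from highest to lowest risk.
    """
    scores = defaultdict(float)
    for incident in incidents:
        # Older incidents count for less; the half-life is an arbitrary choice.
        weight = 0.5 ** (incident["days_ago"] / halflife_days)
        scores[incident["cell_id"]] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Three recent burglaries in cell "A7" outrank one old incident in cell "B2".
history = [
    {"cell_id": "A7", "days_ago": 2},
    {"cell_id": "A7", "days_ago": 5},
    {"cell_id": "A7", "days_ago": 6},
    {"cell_id": "B2", "days_ago": 90},
]
print(score_cells(history))  # [('A7', ~2.7), ('B2', 0.125)]
```

Even in this toy version, the output is nothing more than a transformation of the incidents already recorded, which is why, as discussed below, whatever distortions exist in the historical data are carried directly into the predictions.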

These algorithms have been released into a legal landscape that has not yet adapted to them. Very few laws exist to regulate their use, but this has been changing as public awareness of AI and its associated risks has risen. One recent development has come from the Office of Management and Budget (OMB), which has issued a policy designed to mitigate the risks of federal agencies’ use of artificial intelligence. Press Release, The White House, Fact Sheet: Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence (Mar. 28, 2024). However, the impact of this policy will likely be minimal, given that it does not apply to state and local agencies and that it creates large carveouts for national security.

Given the paucity of specific legal protections in this area, the Constitution may have to do significant heavy lifting in protecting people’s rights. In particular, Fourth Amendment limitations on searches may be able to check the expansion of this technology into unconstitutional areas. The power of the Fourth Amendment is especially notable given that courts often use it to place a “thumb on the scale in favor of judicial caution” with respect to new technology like AI. Orin S. Kerr, The Fourth Amendment and New Technologies: Constitutional Myths and the Case for Caution, 102 Mich. L. Rev. 801, 805 (2004). Indeed, the Supreme Court has reiterated time and again that the “progress of science” cannot be allowed to erode Fourth Amendment protections. Carpenter v. United States, 585 U.S. 296, 320 (2018) (citing Olmstead v. United States, 277 U.S. 438, 473–74 (1928)).

The Fourth Amendment and Robotic Reason

The “touchstone” of a Fourth Amendment analysis is reasonableness. Birchfield v. North Dakota, 579 U.S. 438, 477 (2016). As with many constitutional provisions, there are no hard and fast rules for when a search is reasonable and thus permissible. Instead, reasonableness is a totality of the circumstances test, weighing the “nature and quality” of the search against its importance. Cty. of L.A. v. Mendez, 581 U.S. 420, 427 (2017). In many cases, a reasonable search is one where the officer has received a warrant, requiring her to have demonstrated probable cause. Skinner v. Ry. Labor Executives’ Ass’n, 489 U.S. 602, 619–20 (1989).

To show probable cause, an officer must demonstrate that there is a substantial chance of uncovering criminal activity as a result of a search. Illinois v. Gates, 462 U.S. 213, 243 n.13 (1983). This is another totality of the circumstances test, under which a judge flexibly evaluates the specifics of a given case. Id. at 239. The framework around probable cause is intentionally and necessarily open to interpretation to allow for practical application in a way that aligns with common sense. United States v. Ventresca, 380 U.S. 102, 108 (1965). Whether a judge will grant a search warrant in a particular case is thus highly discretionary. This is one potential point of entry for AI, as the recommendations made by AI can be used as a factor in the flexible probable cause analysis. A judge may be more likely to grant a warrant to search an area or a person that AI has flagged as high-risk. Because of the discretion inherent in such a determination, there are no real constitutional guardrails to prevent this from happening. However, there is a reason we defer to officers of the court: We expect them to be able to balance these questions fairly. There is no reason to suspect they will be any less able to do so in cases that involve AI than in other contexts.

Additionally, the output of a predictive policing algorithm standing alone will likely never be enough to justify a warrant. While it has been suggested that the output of AI could be considered as a tip from a confidential informant, which is sometimes enough to justify a warrant on its own, AI cannot reach the same bar, highlighting its role as merely one factor among others. See Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 907 (2016). In particular, the information received from informants is subject to scrutiny from officers, who are required to gauge the level of detail and specificity that the informant provides. See Adams v. Williams, 407 U.S. 143, 144 (1972). AI, which makes predictions about criminal probabilities based on historical data, can offer no such specificity in its recommendations. It has no real knowledge of any developing crime that it could demonstrate to an officer to make its output more plausible. It is, quite literally, “merely reciting rumor or speculation,” which almost unequivocally does not rise to the level necessary for a search warrant to be granted. Klinkosum on Criminal Defense Motions (C. Melissa Owen ed., 2024).

It also has been suggested that AI could be viewed through the lens of the collective knowledge doctrine, as if the algorithm were an officer with knowledge sufficient to establish probable cause. Rich, supra at 896. But again, it is impossible to view AI in this manner because an algorithm has no specific knowledge of any given case sufficient to say that a crime is likely being committed. Therefore, although AI output may be factored into an application for a search warrant, it should never be more than one piece of information to be carefully weighed by a judge.

A potentially more concerning scenario is when AI is being used to inform searches without the oversight of a disinterested third party like a judge. In particular, law enforcement officers are permitted to conduct searches without first getting a warrant if they have a reasonable suspicion that a crime has been or will be committed. Reasonable suspicion is a belief based on specific facts that a person is involved in criminal activity. Terry v. Ohio, 392 U.S. 1, 21 (1968). This is an intentionally lower standard than probable cause, which allows officers to act quickly in the midst of potentially dangerous situations. See Alabama v. White, 496 U.S. 325, 330 (1990). Importantly, a person’s behavior is not the only factor that can give an officer reasonable suspicion; an officer also may consider his experience and other relevant information. United States v. Arvizu, 534 U.S. 266, 273 (2002). This creates another potential point of entry for AI. If AI flags a person or an area as risky, an officer is more likely to interpret the behavior of that person, or of an individual in that area, as criminal. Such a use of AI is likely a constitutional part of a reasonable suspicion analysis when combined with other factors, as is the case for a probable cause analysis. Yet, in the context of the lower reasonable suspicion bar, an even closer eye must be paid to AI, as its output is likely to play a larger role in the analysis.

This greater impact makes even more salient one of the risks of officers considering AI outputs: the issue of “double counting.” If the algorithm weighs a factor in favor of a search, it will not tell the officer what that factor is; it will report only the calculated likelihood that a crime is being committed. The officer may then detect the same factor on her own. In that case, the officer will consider both the factor she has detected and the AI’s recommendation, which already reflects that factor in its analysis. The factor is therefore counted twice. This suggests that when AI output is used to inform reasonable suspicion, officers may consider themselves to have reasonable suspicion in cases where they otherwise would not.

To make this concrete, an officer may be on the fence about whether she has enough reasonable suspicion to search someone who appears as if he might be selling drugs outside of a liquor store. She considers both his behavior and the fact that he is outside of a liquor store. She then turns to an AI tool, which calculates that there is a high likelihood of someone selling drugs in this area. This is the deciding factor that pushes her to make a search. However, if the AI factored into its output that the suspect was outside of a liquor store, and this was already in the officer’s analysis, she has weighed it twice. The fact that the AI recommendation pushed her over the tipping point into reasonable suspicion, but its recommendation was based on double-counted information, suggests that she did not actually have reasonable suspicion. A search coming out of this suspicion is therefore more likely to be unconstitutional.
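The arithmetic behind this double counting can be made explicit with a minimal sketch. The baseline, factor weights, and threshold below are hypothetical numbers chosen only to show the effect; they do not model any real officer’s reasoning or any actual algorithm.

```python
# Illustrative sketch only: a toy log-odds model of "double counting."
# All numbers are hypothetical.
import math

def to_probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

PRIOR = -2.0                # baseline log-odds that a crime is occurring
BEHAVIOR = 1.2              # officer's observation of suspicious behavior
OUTSIDE_LIQUOR_STORE = 0.8  # location factor
# The algorithm's score already incorporates the location factor.
ALGORITHM_SCORE = PRIOR + OUTSIDE_LIQUOR_STORE

# Counted once: behavior plus location, each weighed a single time.
counted_once = to_probability(PRIOR + BEHAVIOR + OUTSIDE_LIQUOR_STORE)

# Counted twice: the officer adds her own location observation on top of an
# algorithmic score that has already weighed that same location.
counted_twice = to_probability(ALGORITHM_SCORE + BEHAVIOR + OUTSIDE_LIQUOR_STORE)

print(f"counted once:  {counted_once:.2f}")   # 0.50 -- right at the line
print(f"counted twice: {counted_twice:.2f}")  # 0.69 -- appears to clear it
```

In this toy model, the same evidence sits exactly at the threshold when counted once but appears to clear it comfortably when counted twice; the extra suspicion is an artifact of the overlap, not of any new information.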

The fact that AI recommendations are a black box, in which it is unclear how various factors are being weighed, is perhaps the most troubling aspect of their use in law enforcement. As in the example above, they run the risk of making certain factors seem more salient than they are in reality. They also create the risk that police officers and officers of the court will put too much weight on their recommendations. The data sets and programming that go into artificial intelligence are enormously complicated, leading people to generally trust the recommendations they give despite not understanding how they are made. This can create scenarios of blind trust, which is especially problematic given that we do not actually know how well many of these AI programs work. The new OMB policy requires an impact assessment for “rights- or safety-impacting AI” used in federal programs, underscoring a glaring problem: Up to this point, we have not systematically weighed the costs and benefits of the AI currently being used in policing. What Does the New White House Policy on AI Mean for Law Enforcement?, Policing Project (Apr. 16, 2024). We have no idea how often AI programs give outputs that accuse the innocent, spare the guilty, or aggravate existing biases.

Bias is also a key concern when it comes to AI and the recommendations it makes. Critics have pointed to the fact that AI suffers from the worst of both worlds when it comes to its ability to “reason”: It lacks the human ability to take things on a case-by-case basis, yet it is still susceptible to the biases of the data scientists who coded it. See Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (1st ed. 2016). This is a fundamental problem for AI. Since the recommendations it makes are based on the data it has been fed, every baseless stop, every incident discovered through the overpolicing of impoverished areas, and every racially biased conviction lives in those data and is repeated with each risk calculation the algorithm makes. Further, race and ethnicity are data points that are commonly collected in arrest reports and other sources of information fed to predictive policing algorithms. Given that minority groups face higher rates of prosecution, it is possible that predictive algorithms are using race as a proxy for criminality, creating an even more vicious cycle of policing. Bias therefore likely exists at multiple levels of these algorithms, in ways that may go unquestioned by the police using the outputs they generate.

The Constitutionality of Other Potential Uses

The above discussion attempts to shed light on the potential constitutional concerns around predictive policing algorithms, with a focus on their ability to inform judges and police officers and to provide an additional data point in probable cause and reasonable suspicion analyses. This ability of AI to “augment a decision” seems to be the blurred line separating constitutional from unconstitutional uses of AI. There are two other ways that AI can be used, falling on opposite sides of that blurred line: to inform police patrolling and to make emergency decisions. The use of AI in patrolling seems squarely constitutional, while the use of AI to make decisions in emergency scenarios is highly suspect.

In the case of patrolling, the recommendations generated by predictive policing algorithms are not directly being used to justify a search. In fact, the output of these algorithms never even goes in front of a judge or a police officer. Instead, the AI’s output on high-risk people or places is used for purely logistical purposes, as one factor in informing the decision of where to send officers on patrol. This allows AI to inform where officers will be sent, increasing the likelihood that they will patrol high-crime areas, without these officers being primed to make searches by having received a prediction of a high likelihood of criminal activity from an algorithm. While using predictive policing at all runs the risk of perpetuating biases, using it in the context of establishing patrol locations does not implicate the same constitutional issues as using these algorithms in a probable cause or reasonable suspicion analysis. Given that police have wide latitude in deciding where to patrol, this is almost certainly constitutional.

On the other hand, it has been suggested that the role of predictive policing algorithms could go far beyond suggesting patrolling areas or shoring up warrant applications, to the point where AI would be used to make emergency decisions. See Kenneth Kushner, AI-Enhanced Decision-Making, Police Chief Mag. (Apr. 17, 2024). For example, information on a proposed warrant could be entered into a computer system, which would immediately analyze the situation and then grant or deny the request. The logic behind this is tempting: AI can make decisions much more quickly than a person can, and when time is of the essence, this could allow officers to spring into action. While the idea of AI granting a warrant may seem far off, it may be within the realm of interpretation of current law, such as the collective knowledge doctrine, as discussed earlier. If a predictive policing algorithm were to calculate that there are circumstances justifying a warrant, and that prediction were conceived of as information from another officer, an officer could claim justification for conducting a warrantless search on that recommendation alone.

However, moving forward with such a use of AI is both risky and unnecessary. Most notably, officers are already permitted to act without a warrant in exigent circumstances, and they do so frequently. See Kentucky v. King, 563 U.S. 452, 460 (2011); see also Riley v. California, 573 U.S. 373, 382 (2014). Using AI as a stopgap for quick authorization in emergency situations is thus largely unnecessary. Additionally, as discussed, AI lacks true human reason. A decision made by AI alone is therefore not the equivalent of a decision made by a judicial officer, which is a requisite to establishing probable cause in many cases. See Gerstein v. Pugh, 420 U.S. 103, 118 (1975). On the whole, putting so much unfettered discretion in the hands of a machine flouts the right to be free of unreasonable searches.

Next Steps

The most constitutionally gray area of the use of predictive policing algorithms is their place in establishing probable cause and reasonable suspicion. In this area, it seems as if constitutional protections can pick up the slack of minimal legislation by checking the role that AI can play in these analyses. Even so, in the future, we would be well served to better define the exact parameters limiting the use of predictive policing algorithms. Specifically, relevant legal factors worth exploring include when, if ever, AI output is actionable standing alone and what other specific types of evidence are required to shore up algorithmic outputs to reach the bar of probable cause or reasonable suspicion. Given the Supreme Court’s desire to maintain flexibility in these standards, legal professionals should not expect concrete guidance, but a general framework for how AI fits into existing procedure is sorely lacking.

Additionally, to truly pin down the weight that AI outputs should have in constitutional analyses, we need more information about these algorithms. Most importantly, we need to know what data points are in a given predictive policing algorithm. First, this can help to avoid the problem of double counting. Second, knowing the granularity of these data can allow us to better evaluate the relevance of a given recommendation to a real case. Third, knowing how long ago these data were collected also can suggest how much credence we can put into an algorithm’s recommendation. We likely wouldn’t trust an informant making a tip based on decade-old data, yet we have no way to know when the data feeding these algorithms were collected. Finally, knowing what data inform a given algorithm lets us better check the logic behind its recommendations. While it’s true that AI can pick up patterns that humans can’t detect, we should be wary of acting on recommendations built on trends that seem impossible to rationalize.

Despite the important benefits that would flow from access to these data, getting them is likely to be a tall order. The data that feed these algorithms may be sensitive, leading organizations to protect them for privacy reasons. In addition, the algorithms themselves may be proprietary. Great expense in terms of both time and money goes into the development of an algorithm, and it is natural for an organization to want to protect this investment by keeping the way the algorithm functions private. Institutions working to investigate predictive policing algorithms already have tried to obtain more information about the data that feed these algorithms, but they have encountered significant resistance, with police departments like the NYPD refusing to share their data sets. Lau, supra.

Even if all of this information were to be made public tomorrow, problems would remain. In particular, difficult questions would be raised with respect to the amount of trust that should be put into various data points. For example, how recently should data have been collected for them to be considered sufficiently trustworthy? Is it unjust to factor in certain data points like race or ethnicity, or is doing so necessary for an algorithm’s accuracy? While having more data won’t resolve these difficult debates, it would certainly be a step up from only speaking in hypotheticals when discussing these issues.

Conclusion

This discussion has sought to explore the constitutionality of predictive policing algorithms. It has been suggested that these algorithms can play a limited role in justifying warrants and in conducting warrantless searches, that they can take a larger role in identifying what areas should receive general police presence, and that they should never be the sole authority to decide that there is probable cause or reasonable suspicion in a given circumstance. The fact that algorithms are mere manipulations of historical data, unable to truly reason about the facts of a particular case, allows existing constitutional safeguards, such as limitations on the trust given to informants or to collective knowledge, to be applied to limit the use of AI. When police officers or judges place too much reliance on algorithmic outputs, the line may be crossed into unconstitutional conduct. As it stands, it is difficult to tell how much reliance should be put on these algorithms generally, due to the secrecy surrounding their workings. Important factors to consider in deciding how much to rely on algorithmic outputs include the age of the data, the data’s granularity, and the extent to which the factors an algorithm has relied on are already part of an officer’s analysis. By continuing to push for the release of algorithmic data and considering the difficult questions around how much these data should be trusted, we can move forward into a future where these algorithms help to detect and prevent crime without imperiling our constitutional protections.
