January 15, 2021 TECHNOLOGY

Artificial Intelligence: Benefits and Unknown Risks

By Judge Herbert B. Dixon Jr. (Ret.)

Perhaps no technology has stoked the dystopian fears of society as much as artificial intelligence (AI). This unease has been particularly apparent when considering AI’s potential for an increasingly central role in the justice system. For decades, science fiction staples like Minority Report and Blade Runner, with their dark portraits of a future, technology-driven police state, warned that along with the benefits of such technology can come unintended consequences.

We have experienced increasing comfort with AI technology in our daily lives and seen significant law-related uses of that technology. Some of the advances are in eDiscovery, predictive policing, forensic crime solving, facial recognition, and risk assessment in criminal cases for pretrial release and sentencing.

AI is a constant presence. We call out to AI personae named Alexa, Bixby, Cortana, Siri, and Google Assistant, asking all kinds of questions to which they dutifully reply. AI corrects our spelling, recommends movies, schedules appointments, and even offers medical advice. AI is no longer just a distant concept. It is here, but what are the risks?

In my column one year ago, I discussed how far AI technology had come in law and law enforcement. In this article, I take stock of where AI use might be headed and note a few red flags of caution that have emerged.

AI for eDiscovery and Document Review

One area in which AI has proven increasingly useful is eDiscovery and document review. The discovery process in litigation often involves vast amounts of digital data, for which human review could take years. AI applications for eDiscovery have proven faster, more efficient, and more accurate than traditional manual review. These applications have also ventured into image processing, including the ability to recognize persons, places, and things. (Suggested experiment: Go into the photo section of your smartphone, find the Search function, and type in animal, bird, tree, snow, water, or some other thing; now hit Enter or Search and marvel at the operation of this basic form of AI image analysis.)

AI in Law Enforcement: Predictive Policing

In law enforcement, AI has already been deployed in multiple ways. Predictive policing AI tools include location-based algorithms that form links between location, event, and past crime rate data and render predictions of where and when crimes are more likely to occur. PredPol, an algorithm used by police departments in several cities across the United States, breaks a city into grids, continually updating its crime predictions for each zone over the course of the day. Based on the hot spots the algorithm identifies, police can alter deployment plans and assign extra patrols.
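The core mechanic described above can be illustrated with a minimal sketch. PredPol's actual model is proprietary; the code below simply assumes a grid of square cells and ranks cells by recent incident counts, which captures the basic "hot spot" idea without any of the real system's statistical machinery.

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=500, top_n=3):
    """Rank grid cells by recent incident count.

    incidents: list of (x, y) coordinates, e.g., in meters.
    cell_size: side length of each square grid cell.
    Returns the top_n (cell, count) pairs with the most incidents.
    """
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return counts.most_common(top_n)

# Illustrative data: four incidents cluster in one 500 x 500 cell.
recent = [(120, 80), (300, 450), (310, 460), (340, 420), (900, 900)]
print(hotspot_cells(recent, top_n=2))  # cell (0, 0) leads with 4 incidents
```

A real deployment would weight recent incidents more heavily and refresh the ranking throughout the day, as the article notes; this sketch shows only the grid-and-count skeleton.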

Other predictive algorithms analyze people's background information, including their age, gender, marital status, history of substance abuse, and criminal record, to determine who is likely to commit a future crime--information the police might act on to intervene before a crime occurs. Additionally, AI-based facial recognition software has been employed to identify suspects from images caught on security cameras, cell phones, and other video sources.

AI for Crime Solving

Several uses of AI for crime solving are in development. Researchers are exploring the concept of "scene understanding," where algorithms draw context from the relationships and interactions between people, places, and objects--for example, a person brandishing and firing a handgun--to either detect a crime in progress or aid in the investigation of a crime. Researchers are also developing algorithms to analyze gunshot audio captured on smartphones and other devices. These algorithms may be able to identify gunshots, indicate whether multiple firearms were present, determine which gunshots came from which guns, and provide the likely caliber and class of weapons used.

AI technology has the potential to significantly transform forensic science in criminal investigations. As vast troves of forensic data from current and past cases are digitally stored in a centralized manner, software developers may design algorithms that integrate forensic findings from an expansive range of evidence, data, and relevant contextual information to simulate and reconstruct a crime scene. Researchers theorize that investigators could run multiple simulations of the same crime event, utilizing different pieces of evidence in different ways. The result would be statistical probabilities for different scenarios, providing insight into how an event may have transpired, when, to whom, and by whom.

Judicial Use of AI for Risk Assessment

In recent years, courts around the country have been using AI-driven assessment tools to gauge the risk of recidivism of defendants in criminal cases. One popular application, called COMPAS, draws on a series of questions answered by the defendant after arrest or pulled from the defendant's criminal records. Based on the answers, COMPAS issues a score from 1 to 10 quantifying the defendant's likelihood of rearrest if released, providing an assessment tool for pretrial release and sentencing determinations.
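To make the questionnaire-to-score pipeline concrete, here is a deliberately simplified sketch. COMPAS's actual factors and weights are proprietary; the factor names and weights below are invented for illustration. The point is only the shape of the computation: numeric answers are combined into a raw score, which is then mapped onto the 1-to-10 reporting band.

```python
def risk_score(answers, weights):
    """Map questionnaire answers to a 1-10 risk score.

    answers: dict of factor name -> numeric response.
    weights: dict of factor name -> weight (hypothetical values).
    """
    raw = sum(weights[k] * answers.get(k, 0) for k in weights)
    # Clamp the weighted sum into the 1-10 band used for reporting.
    return max(1, min(10, round(raw)))

# Invented factors and weights, for illustration only.
answers = {"prior_arrests": 2, "age_at_first_arrest": 1, "substance_abuse": 0}
weights = {"prior_arrests": 1.5, "age_at_first_arrest": 2.0, "substance_abuse": 1.0}
print(risk_score(answers, weights))  # 5
```

Even this toy version makes one limitation visible: the score depends entirely on which factors are chosen and how they are weighted, choices that are opaque to the defendant and the court.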

Risks of AI in Policing

Despite the promise of AI, its use in policing and the courts has already revealed problems. A report by the National Institute of Standards and Technology found that the type of facial recognition algorithm used by police produced more false positive results when evaluating images of Black women. Further, researchers found that some facial recognition algorithms achieved only slightly better than 50 percent accuracy. Such results have prompted cities like San Francisco to ban the use of AI facial recognition.

For predictive policing applications, such as PredPol, some researchers have raised concerns that the algorithms are tainted, having been trained on so-called dirty data. For example, in 2010, the Department of Justice (DOJ) investigated the New Orleans Police Department, resulting in a scathing report. The DOJ found repeated constitutional and federal law violations by the police department--specifically concerning the use of excessive force disproportionately against Black persons; targeting racial minorities, non-English speakers, and LGBTQ individuals; and failing to address violence against women. A year later, the city entered into an agreement with the firm Palantir to deploy AI-based predictive policing. Evidence suggests the algorithm developers failed to clean or correct the historical data, including arrest records and police reports, of the violations noted in the DOJ report before using that data.

AI Now, a research center dedicated to studying the social impact of AI, conducted a study examining 13 jurisdictions that used predictive policing algorithms and had been subject to government investigations. In nine of those jurisdictions, the study found strong evidence that AI systems had been trained on dirty data, which is likely to result in excessive or insufficient deployment of police resources to the same communities that were the subject of the tainted data.

Judicial Use of AI

The desire within the criminal justice system for a tool to assess the risk of recidivism is understandable. Proponents believe AI could offer the key to reducing human error and bias in the courts. If algorithms could accurately predict recidivism, the increased fairness and accuracy of such decisions would permit courts to be more selective about who is imprisoned and for how long. There have been some positive results reported from the use of risk assessment tools. The American Civil Liberties Union noted that a risk assessment algorithm adopted as part of the 2017 New Jersey Criminal Justice Reform Act resulted in a 20 percent reduction in the number of people incarcerated while awaiting trial.

However, in 2016, ProPublica conducted a study of the COMPAS risk assessment tool, with startling results. Analyzing the risk scores of over 7,000 people arrested in Broward County, Florida, ProPublica's report found that within two years of their arrests, "[o]nly 20 percent of the people predicted to commit violent crimes actually went on to do so." Further, the algorithm appeared to exhibit bias, falsely flagging Black defendants as future criminals at almost twice the rate of white defendants.
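ProPublica's headline finding is a disparity in false positive rates: among defendants who did not reoffend, Black defendants were flagged high-risk far more often. A short sketch makes that metric precise. The data below are invented for illustration and are not ProPublica's figures.

```python
def false_positive_rate(records):
    """Share of non-reoffenders who were flagged as high risk.

    records: list of dicts with boolean keys "high_risk" and "reoffended".
    """
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["high_risk"])
    return flagged / len(non_reoffenders)

# Toy cohorts, invented for illustration only.
group_a = [{"high_risk": True,  "reoffended": False},
           {"high_risk": True,  "reoffended": False},
           {"high_risk": False, "reoffended": False},
           {"high_risk": True,  "reoffended": True}]
group_b = [{"high_risk": True,  "reoffended": False},
           {"high_risk": False, "reoffended": False},
           {"high_risk": False, "reoffended": False},
           {"high_risk": False, "reoffended": True}]
print(false_positive_rate(group_a))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate(group_b))  # 1 of 3 non-reoffenders flagged
```

The comparison matters because a tool can be "accurate" overall while distributing its errors unevenly: equal overall accuracy is compatible with one group bearing twice the rate of wrongful high-risk flags.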

Two cases in the report are illustrative. One involved 41-year-old Vernon Prater, who was arrested for shoplifting $86.35 worth of tools from a Home Depot store. He had previously been convicted of armed robbery and attempted armed robbery and had served five years in prison. Another involved 18-year-old Brisha Borden, who was arrested after she and a friend stole an unlocked Huffy bicycle and a Razor scooter valued at a total of $80. Like Prater, she had a record, but only for juvenile misdemeanors. Nonetheless, COMPAS assessed Prater, who is white, as a low risk of 3 on a scale of 10, while Borden, who is Black, was assessed as a high risk of 8. These predictions later proved to be backward. Within two years, Prater was arrested for breaking into a warehouse and stealing electronics worth thousands of dollars; he was convicted and sentenced to eight years in prison. Two years after her arrest, Borden had not been charged with any additional crimes.

Crowdsourced Risk Assessment versus AI

In 2017, researchers from Dartmouth College conducted a study in which they enlisted 400 volunteers through a crowdsourcing website and asked them to predict the recidivism risk of selected individuals. Volunteers were broken into groups of 20, each group reviewing a subset of 50 defendants who had been part of ProPublica's investigation of the COMPAS risk assessment tool. Each volunteer was given brief information on the defendants and asked to guess whether they would commit another crime within two years. After a series of statistical analyses, the Dartmouth researchers reported that the crowdsourced predictions, made with considerably less information, were "as accurate as COMPAS at predicting recidivism." Moreover, the crowdsourced predictions and COMPAS's predictions agreed for 692 of the 1,000 defendants.

Judicial Discretion versus AI

Finally, the question remains whether AI ultimately will be capable of handling the nuances required of many judicial determinations. Legal problems often require judges to contextualize and balance separate interests in reaching their decisions. AI algorithms tend not to be good at these tasks. Instead, they excel at identifying patterns within data and, assuming the data are untainted by human or other bias, can produce consistent and predictable results. However, consistency and predictability are often not the same as fairness. In coming to a fair decision, judges must balance competing values where there is no clear legal answer. Machines have difficulty operating in a realm that allows for so much discretion. The disparate risk assessment scores Vernon Prater and Brisha Borden received after their arrests illustrate COMPAS's failure to contextualize information as we would hope for in an appropriate exercise of judicial discretion.

Judge Noel L. Hillman of the U.S. District Court for the District of New Jersey authored a 2019 guest article for my technology column, "The Use of Artificial Intelligence in Gauging the Risk of Recidivism." The various concerns posed by AI risk assessment algorithms led him to conclude that "to date, the use of AI at sentencing is potentially unfair, unwise, and an imprudent abdication of the judicial function."

Final Thoughts

AI is here to stay and will play an increasing role in our personal lives and the criminal justice system. We cannot look away from the potential it offers in improving law enforcement and our courts. It may be necessary, however, to step back and take stock of AI's implications. While some may ask if there are areas of discretionary decision-making where the use of AI will never be appropriate, others suggest that more time is needed to validate and improve these technologies before moving forward with them.

Judge Dixon wishes to thank James L. Anderson, Esq., Superior Court of D.C. Senior Judges' law clerk, for his assistance researching the topic of artificial intelligence and preparing this article.

    The material in all ABA publications is copyrighted and may be reprinted by permission only.

    Judge Herbert B. Dixon Jr. (Ret.)


    Judge Herbert B. Dixon Jr. retired from the Superior Court of the District of Columbia after 30 years of service. He is a former chair of both the National Conference of State Trial Judges and the ABA Standing Committee on the American Judicial System and a former member of the Techshow Planning Board. You can reach him at [email protected]. Follow Judge Dixon on Twitter @Jhbdixon.