
Algorithmic Decision-Making in Child Welfare Cases and Its Legal and Ethical Challenges

Matthew Trail

Summary

  • Child welfare attorneys are increasingly encountering algorithmic models aiding in foster care decisions, with models being used to assess potential child abuse, determine services and placement, and predict reunification success.
  • These models, driven by big data, aim to enhance decision consistency and outcomes but have faced criticism for biases, data issues, and arbitrary weights.
  • Existing laws and ethical guidelines provide limited guidance on how attorneys and courts should interact with predictive models in child welfare.
  • Attorneys are urged to be aware of these models, understand their implications, and consider ethical duties of competence, diligence, and supervision. 

When I was representing foster youth in Texas, I used to have contested court hearings every few months about the placement service level for my clients. The service levels determined how much money a foster home would receive, so they were often very important in maintaining a stable placement. The service level determination was made by a third-party contractor in Dallas, and no caseworker or supervisor from Texas Child Protective Services (CPS) was ever privy to how those decisions were made. The caseworkers would just come to court and report the change, and it was always a mystery to the parties and the court why those folks in Dallas had made that particular decision. What I learned after becoming a researcher, which I wish I had known as an attorney, was that I was never actually arguing with a real person’s decision; I was arguing with the decision of an algorithm.

This experience is going to become more common because, over the last decade, child welfare agencies have increasingly used big data to develop and implement predictive models to help them make decisions about the lives of children in foster care. The models are attractive because they promise better, more consistent decisions, and better decisions should in turn lead to better outcomes for children. However, these models are far from perfect, and they have attracted criticism for their use of biased data, disregard of individual rights, and arbitrary weights.

Though predictive models have generated a lot of discussion among researchers, they are not generally well known by attorneys representing foster youth and families. Sometimes this is through simple ignorance that an agency has begun using a model, but sometimes this is because the creators of the model actively planned to exclude attorneys and judges. In my research (submitted for publication) on child welfare predictive models and legal decision-making, I found that child welfare attorneys can have their legal decisions changed by exposure to these models. Therefore, it is imperative that child welfare attorneys and judges understand when and how predictive models are being used and raise objections if need be.

The Use of Models in Child Welfare

At this point, most of us are familiar with the ubiquitous algorithmic models that suggest something we should buy online, the film we should stream, or even directions across town during traffic. When Netflix’s algorithm recommends a film you do not like, the stakes are pretty low, but what about an algorithm that recommends that the child welfare agency investigate a family or that determines the services and placement of a foster child?

The earliest versions of predictive models, trialed in California and Illinois, were designed to predict potential child abuse, but the models produced so many false positives that both states terminated the programs. Despite those failures, new models aimed at identifying potential child abuse followed. The American Civil Liberties Union reported that at least 26 state child welfare agencies have used or were using predictive models, though this is likely an undercount because there is no requirement to report that a model is being used. Risk assessment appears to be the primary use of most of the models, though some jurisdictions are using models for other purposes, such as matching children with treatment programs, matching children with foster homes, or even predicting reunification success.

The risk model with the most exposure is the Allegheny Family Screening Tool (AFST) out of Allegheny County, Pennsylvania. Taking advantage of the county’s integrated data system, researchers used court, CPS, police, school, hospital, and other public records to build a predictive model that creates a risk score for each child in the county. When potential abuse is called into the hotline, screeners are given a score between 1 and 20 that represents the model’s risk assessment for that child. The scores are given after a human reviewer makes an initial assessment, but scores above 18 require a mandatory screen-in, though those can be overruled by human supervisors.
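To make that decision rule concrete, here is a minimal sketch in Python of how such a threshold might operate. The function name, labels, and override handling are illustrative assumptions for explanation only, not the county’s actual software.

```python
# Illustrative sketch only: hypothetical names and labels, not the actual AFST code.
HIGH_RISK_CUTOFF = 18  # scores run from 1 to 20; scores above this trigger a mandatory screen-in

def screening_outcome(risk_score: int, screener_decision: str) -> str:
    """Combine the human screener's initial call with the model's risk score."""
    if risk_score > HIGH_RISK_CUTOFF:
        # A high score mandates a screen-in, though a supervisor may still override it.
        return "screen in (mandatory; supervisor override possible)"
    return screener_decision  # otherwise the human assessment stands

print(screening_outcome(19, "screen out"))  # mandatory screen-in despite the screener's call
print(screening_outcome(7, "screen out"))   # the screener's decision stands
```

Even in this toy version, the important point for attorneys is visible: the model’s number can displace the human judgment, and only a supervisor, not the parties or the court, sees how that happened.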

Proponents of the AFST say it reduces racial bias and accurately predicts cases of serious child injury. Critics say the model pulls from biased public data; promotes racial overidentification; locks families into data from the past, which may not accurately reflect the present; and has unintended negative impacts on people with disabilities. In fact, the last concern has prompted a Department of Justice investigation into the AFST.

Regardless of the concerns, other jurisdictions in multiple states have adopted models based on the AFST, often with little fanfare or announcement. California, Colorado, Oregon, and other parts of Pennsylvania have adopted or are currently developing AFST-style family models, though they are built using local data. This should concern child welfare attorneys and judges because one of the founding ethical features of these models is that the risk scores are intentionally withheld from the court to prevent the numbers from biasing judges, which means that attorneys and judges cannot consider a key piece of evidence affecting the case.

So imagine for a moment that you are representing a family under investigation, a child removed from the child’s home, or even the CPS agency itself. A supposedly sophisticated, scientifically derived child abuse risk score is one of the reasons the case exists, but the state CPS agency says it is unethical to let you or the court see it because it might bias you. I cannot imagine many attorneys or judges being content with that answer.

The interesting twist in this story is that my study presented U.S. child welfare attorneys with a realistic removal and placement decision followed by a risk score, and the score did sometimes change their legal decisions, making them more or less likely, depending on the score, to support removal and foster placement. This is not particularly surprising in light of a large body of research showing that machine advice can change people’s opinions. It does not mean, though, that attorneys and courts are so delicate that we need protection from the advice of predictive models lest we be unable to form our own legal conclusions. In fact, while my experiment did demonstrate that models can, in this context, influence and change legal placement and removal decisions, attorneys mostly stuck to their first decisions regarding the case, despite the influence of the risk score.

There is still an open question regarding what would happen if a risk score were given at the beginning of a case and whether that score would have a greater effect on legal decisions or even become an anchor influencing all later decisions. This is a legitimate concern, but there is no direct evidence for it in the child welfare context at present, and even if there were, dealing with sensitive and possibly prejudicial evidence is certainly within the capabilities of the legal system. Outside ethical constraints imposed by nonlawyer researchers are unnecessary.

One of the common criticisms of algorithmic models in general is that they are essentially black boxes: input goes in, something mathematical happens inside, and the model produces an output. Most attorneys probably would not benefit greatly from having the model’s source code, given that we lack the coding and mathematical background to make much sense of it. Still, there is a push by some model designers for what is called explainable artificial intelligence (AI). This means that while we may not understand the exact workings of the model, we can understand why it produces the results it does, as well as the limits of the model and why it might make mistakes. While this might not always result in improved decision-making, it should at least be the goal when we encounter a new model. Understanding the data the model is based on, how those data are weighted to produce an output, and the model’s error rate will help attorneys know whether the model is being used appropriately for their specific case and how likely its predictions are to be correct.
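As a rough illustration of what that kind of explanation could look like, consider the sketch below. The feature names, weights, and error rate are invented placeholders, not any real agency’s model; the point is simply that an explainable model can show which data points drove a given score and how often similar predictions have been wrong.

```python
# Illustrative sketch only: invented features, weights, and error rate,
# not any real agency's model.
FEATURE_WEIGHTS = {
    "prior_cps_referrals": 0.9,
    "months_on_public_benefits": 0.4,
    "years_of_stable_housing": -0.3,
}
VALIDATION_ERROR_RATE = 0.27  # hypothetical: roughly 1 in 4 similar predictions was wrong

def explain_score(record: dict) -> dict:
    """Return the overall score along with each feature's contribution to it."""
    contributions = {
        feature: weight * record.get(feature, 0.0)
        for feature, weight in FEATURE_WEIGHTS.items()
    }
    return {
        "score": round(sum(contributions.values()), 2),
        "contributions": contributions,       # which data points drove the prediction
        "error_rate": VALIDATION_ERROR_RATE,  # how often predictions like this one missed
    }

print(explain_score({"prior_cps_referrals": 3, "months_on_public_benefits": 12}))
```

An explanation at this level, which features were counted, how heavily, and with what historical error rate, is the kind of information an attorney can actually cross-examine, even without reading source code.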

Child Welfare Law and Predictive Models

But what guidance does the law give us when our clients are affected by a predictive model? At the moment, there is no child welfare–specific law telling attorneys and courts how they should interact with these predictions. The only actual case law regarding a somewhat similar type of model comes out of criminal law and the use of algorithms to predict recidivism. In State v. Loomis, 881 N.W.2d 749 (Wis. 2016), the Wisconsin Supreme Court, in a due process challenge brought by a criminal defendant, allowed the use of a controversial proprietary algorithmic predictive model called COMPAS, so long as lower courts considered it along with a five-point warning about the possible flaws inherent in the COMPAS prediction.

Although the court’s decision was well reasoned, when researchers at the Max Planck Institute tested these warnings experimentally, the warnings had no effect on people’s decisions regarding recidivism or on the accuracy of their predictions. The research therefore suggests that merely warning people about the use of algorithmic models might not achieve the desired results.

Beyond Loomis, there are a few unpublished decisions in which courts have considered an algorithmic model’s effect on other public welfare–type cases. K.W. ex rel. D.W. v. Armstrong was an Idaho case that challenged a Medicaid model that reduced benefits to people with disabilities. C.S. et al. v. Saiki et al. out of Oregon was also about reduced disability benefits. Arkansas Department of Human Services v. Ledgerwood dealt with a model that arbitrarily reduced full-time nursing hours for people with disabilities. Finally, there is a juvenile court decision out of Washington, D.C., challenging the use of a juvenile recidivism model. These suits all eventually settled in favor of the plaintiffs.

There is also ongoing litigation in cities and counties across the country regarding the use of predictive models by police departments, including models that target juveniles. So there are multiple ways in which a foster youth might be caught up by an algorithmic model that attorneys and courts should consider.

There are obviously many other lawsuits regarding private entities and corporations and their use of predictive models, including big suits such as the Department of Justice’s settlement with Meta, class action complaints against health insurers, and the Federal Trade Commission’s complaint against Amazon. The White House Office of Science and Technology Policy has issued what it terms a “Blueprint for an AI Bill of Rights,” and there is also a legislative proposal calling for the overall governance and fair use of algorithmic models. Perhaps the most relevant development is the Biden administration’s October 2023 executive order governing the use of artificial intelligence in federal agencies. It included a draft memorandum from the Office of Management and Budget that laid out protections for the use of AI in areas where people’s rights could be affected, and child welfare and custody are specifically mentioned. How these cases, executive orders, and proposals will affect future AI and predictive model use in your local court remains to be seen, but for the present, the state of the law regarding predictive models in child welfare is uncertain.

Our Ethical Duties

Even if the law is ambiguous, our ethical rules should provide a little clarity on how we should proceed when faced with the effects of predictive models on our clients. The American Bar Association (ABA), in its 2019 resolution on the use of artificial intelligence, laid out multiple ways in which technology might affect a legal practice and ultimately concluded that attorneys’ ethical duty of competence includes an understanding of technology. The ABA also stated that attorneys’ ethical duty to supervise nonlawyers could have implications for the use of technology.

In the current context, this implies that child welfare attorneys representing children and parents have a duty to know and understand the implications of predictive models being used by the child welfare agency. That duty is also invoked by the ethical obligation to represent our clients diligently. For attorneys representing CPS agencies, the duty to supervise means that they have an additional ethical obligation to ensure that the work of the predictive model itself comports with legal ethical rules.

For attorneys in states with rules similar to Model Rule 8.4(g) regarding harassment and discrimination against protected classes, the ABA suggests that the use of technology with inherent biases could be enough to trigger a violation of the rule. Even in states that have not adopted this or a similar rule, every state’s judicial code of conduct includes a provision against discrimination that could arguably apply when courts hear a case involving the use of a potentially biased predictive model.

Interestingly, the creators of many of these models have drafted detailed ethical guidelines for their use, acknowledging the limitations inherent in biased historical data, and one recurring suggestion is the inclusion of stakeholders in the development and implementation of the models. However, the child welfare legal community seems to have been mostly left off the invitation list.

So what does our legal community need to do with these models? Certainly, the first step is simply awareness. If you are representing a foster child, a parent, or the CPS agency, you have a duty to learn if there are any algorithmic predictive models being used regarding your client. Is a model determining your client’s Medicaid eligibility? Is a model suggesting what placement or services a foster child should have? Did a model help give rise to the removal of the child from the child’s family?

Unfortunately, this probably will require more steps than simply asking the caseworker, who might not know. You might need to go to someone at the state office level to get the full answer. To the extent that the court allows it, you can also make formal discovery requests regarding the use of predictive models. Courts themselves should take up this issue and ask these questions of the parties.

In addition to simply finding out if a model has been used, we also need to learn about how the model was made, who made it, and what data they used in its creation. Was it developed internally by the state agency, or was it made by a third party? There are additional ethical concerns regarding models built by third parties that use proprietary software. Can someone explain the model’s accuracy and error rate?

As noted above, there is certainly legwork and possibly even discovery required to find answers to these questions. Where CPS agencies are reluctant to turn over model information, either because the software is third-party proprietary or because of misguided ethical attempts to protect attorneys from bias, lawyers may benefit from the intervention of a nonprofit legal firm with resources to assist.

For CPS agencies considering the use of a new predictive model, stakeholder meetings must include the child welfare legal community. The legal community in turn needs to be prepared to challenge the use of models, if need be, but also cooperate with CPS agencies in their use.

To be clear, the use of these models is not inherently flawed. Research suggests that they can help us make better decisions and can be used ethically. In some ways, they are similar to actuarial instruments, like the Child and Adolescent Needs and Strengths (CANS) or Structured Decision Making, that are routinely discussed in case planning and in court. The difference is that these new predictive models are mostly being used without the knowledge or oversight of the legal community, and it is our ethical duty to provide both.

Remember that these models are sophisticated, but they are based on historical and often aggregate data. They make a prediction, with some degree of accuracy, that something might happen. A child might be neglected. A placement might break down. A service might be appropriate. This can be helpful information for sure, but it is just a possibility based on data points from the past. Our clients are more than a collection of data points.
