June 26, 2024 Feature

Mitigating Algorithmic Bias: Strategies for Addressing Discrimination in Data

Sonia M. Gipson Rankin

This article examines the pervasive issue of algorithmic bias, particularly within large language models (LLMs) and the legal system. It argues that unlike simple programming bugs, these biases are deeply ingrained in the design and training data of artificial intelligence (AI) systems. By understanding the historical roots of bias and its real-world consequences across various sectors, we can develop effective strategies to mitigate its impact and ensure AI serves as a tool for progress. Weaving together historical insights, case studies, and forward-looking recommendations, the article aims to equip legal professionals with the knowledge and tools necessary to lead the charge against the perpetuation of bias in AI systems.

The Inherent Biases of AI: Beyond Bugs

Programmers understand the inevitability of bugs—errors in code that lead to unintended outcomes. A programming bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. In her work on Charles Babbage’s Analytical Engine, an early computer, Ada Lovelace was among the first to point out the possibility of such mistakes: her concern was that incorrect instructions could be executed because of errors in the program cards inserted into the machine. This scenario, in which the machine performs actions other than those intended, is essentially what we now call a “bug” in programming. However, unlike these relatively straightforward glitches, algorithmic bias is a far more complex issue—for bias is not just a bug in the system, but rather inherent in the design of large language models. And in the legal landscape, the pervasiveness of bias within LLMs makes it challenging to administer justice.

Instead of traditional linear programming, AI, particularly in forms like generative AI and LLMs, relies on neural networks to sift through extensive datasets, identifying patterns and making predictions. Bias in AI is a pervasive problem rooted in the architecture of these systems and the choices made during the design process, in data selection, and in the neural networks themselves. For instance, when looking at the design process, Timnit Gebru and her colleagues conducted research on how design choices can introduce bias in LLMs. They introduced the idea of “stochastic parrots”—LLMs that might seem intelligent but are really just copying patterns from the data they are trained on. This can be problematic because the data itself might contain biases, which the LLM would then unknowingly perpetuate. Their research highlights how specific design decisions—like what data are used, how much weight they are given, what algorithms are chosen to train the model, and how success is measured—can lead to biased outcomes.
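To make the point concrete, consider a deliberately simplified sketch. Everything below (the feature names, the coefficients, and the data) is fabricated for illustration and is not drawn from any real lending system or from the research discussed above. A toy model trained on historically skewed approval decisions reproduces the skew even though it is never shown the protected attribute, because a correlated proxy feature carries the same signal.

```python
# Illustrative sketch only: fabricated data showing how a model can reproduce
# historical bias through a proxy feature, without ever seeing the protected
# attribute directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (0 or 1) and a correlated proxy feature
# (think of a neighborhood indicator). The model is never given `group`.
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0, 0.3, size=n)   # correlates strongly with group
income = rng.normal(50, 10, size=n)          # legitimate feature, identical across groups

# Historical labels encode past bias: identical incomes, different approval odds.
p_approve = 1 / (1 + np.exp(-(0.05 * (income - 50) + 1.0 - 2.0 * group)))
label = rng.random(n) < p_approve

X = np.column_stack([income, proxy])         # protected attribute excluded
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The model learns the group signal from the proxy and reproduces the
# historical gap in approval rates.
```

This is the dynamic the “stochastic parrots” critique describes: the model faithfully repeats whatever regularities, fair or unfair, its training data contain, and simply excluding the protected attribute from the inputs does not remove the bias.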

Additionally, the challenges posed by AI’s “black box” nature are particularly acute in legal settings, where the rationale behind decisions must be explainable and justifiable. The term “black box” in AI refers to models whose internal workings and decision-making processes are difficult for humans to understand or even discern. Like a physical black box, AI models have clear entry points for data (inputs) and produce outputs (predictions or recommendations). The hidden part is the complex mathematical relationships and algorithms within the model that transform the inputs into outputs. These internal workings are often opaque and not easily interpretable by humans. This lack of transparency further hinders the identification of bias within AI systems. Algorithmic bias refers to consistent, reproducible errors in AI systems that lead to “unfair” outcomes, and it can perpetuate or even amplify societal inequalities. Such bias arises from developers’ own biases or from the use of biased or incomplete data, embedding societal prejudices deep within the AI. The opaque nature of LLM outputs poses a significant hurdle when integrating them into legal applications.

The allure of AI lies in its promise of personalization and efficiency, tailoring experiences and decisions to individual patterns and precedents. This personalization, however, comes at a significant cost: the narrowing of our information horizons and the potential reinforcement of existing societal biases. This is not speculative; research has consistently shown how AI-driven recommendations can amplify personal biases, potentially diminishing our collective awareness and understanding. This is largely due to the feedback loops created by user-generated data, where the algorithm may display content more frequently based on users’ interactions, potentially reinforcing societal biases. For instance, in the case studied by Latanya Sweeney, searches for names commonly associated with African Americans resulted in more ads featuring the word “arrest” compared to searches for names commonly associated with white individuals, highlighting how AI systems can perpetuate and even amplify existing biases. Furthermore, AI algorithms can unintentionally pick up and act upon societal correlations that may be considered unacceptable or even illegal, such as reducing lending to older individuals based on a statistically higher likelihood of defaulting, which could be seen as illegal age discrimination. The complexity of defining and measuring fairness in AI, with over 21 different definitions identified by researchers, adds another layer of difficulty in addressing and minimizing bias in AI systems. For the legal community, this raises deep concerns about transparency and the ability to explain and justify decisions influenced by AI, a cornerstone of legal practice.
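The tension among those competing definitions can be shown with a few lines of arithmetic. The sketch below uses entirely fabricated confusion-matrix counts for two hypothetical groups; it is meant only to show that a single system can satisfy one common fairness definition (predictive parity) while violating others (demographic parity and equal false positive rates) at the same time.

```python
# Fabricated counts, for illustration only: the same system can be "fair" under
# one definition of fairness and "unfair" under another.
def rates(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "selection_rate": (tp + fp) / total,    # compared under demographic parity
        "false_positive_rate": fp / (fp + tn),  # compared under equalized odds
        "precision": tp / (tp + fp),            # compared under predictive parity
    }

# Hypothetical confusion-matrix counts for two groups (not real data).
group_a = rates(tp=300, fp=100, tn=400, fn=200)
group_b = rates(tp=150, fp=50, tn=650, fn=150)

for name in group_a:
    print(f"{name}: group A = {group_a[name]:.2f}, group B = {group_b[name]:.2f}")
# With these numbers, precision is identical across groups (0.75), while the
# selection rates (0.40 vs. 0.20) and false positive rates (0.20 vs. 0.07)
# are not.
```

No amount of engineering resolves which of these definitions should govern a given decision; that choice is normative and, increasingly, legal.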

As AI increasingly merges with the legal system, it heralds a new era of potential efficiencies and innovations that promise to automate routine tasks, improve legal research, and refine case management and decision-making processes. But the merger of AI and legal practice also presents complex challenges that require careful consideration. The narrative around AI often conjures images of futuristic, existential threats like those depicted in science fiction films such as The Terminator. Yet, the real and present concerns about AI are more grounded and have immediate implications for society. These concerns include cyberattacks, untrustworthy data, and disparate outcomes, among other critical areas. As the technology advances, cyberattacks will also become more sophisticated and pose a greater threat in the digital world. AI relies on data for training, and if these data are flawed, the AI itself becomes flawed. This can lead to serious problems, such as people being unfairly denied loans or health care.

Evidence already highlights the urgency to address algorithmic biases, especially when disproportionately impacting vulnerable groups. These biases represent deep-seated challenges within AI systems—not mere “bugs” to be patched, but fundamental challenges in the foundation of algorithmic design and training data. For the legal community, the imperative is clear: to adopt a vigilant approach that encompasses ethical, legal, and technical measures aimed at mitigating such biases.

Historical Roots of Bias: A Legal Legacy

The legal system itself has a long history of entrenched biases. At the core of this bias in AI lies a deeply concerning reality: data shaped by historical racism. Historical and systemic racism, entrenched within legal structures such as the Virginia slave codes and the Dred Scott decision, not only have dehumanized and marginalized communities but also have sown the seeds of bias that continue to influence the datasets used to train AI today. The Virginia Slave Code of 1642/3, Act I, defined “tithable person” to include Black women, a legal distinction marking Black women as different from white women, who were not tithable. This statute enacted the first legal distinction between English and African women. The difference reflects the fact that Virginia’s legislators believed that English and African women would play dissimilar roles in the colony, despite their shared field labor on plantations, and this act encoded racial difference into law. A 1662 law stated that children of enslaved women were automatically born enslaved, resolving the question of what to do with the mixed-race children born as the result of sexual relations between white men and enslaved women. This law made it profitable for white men to sexually assault the women they enslaved and impregnate them. And in 1671, the Virginia General Assembly adopted a statute that equated “negroes” with “sheep, horses, and cattle.” In the Dred Scott decision, Chief Justice Roger Taney enshrined into legal history that Black people “had no rights which the white man was bound to respect.” These discriminatory legal structures of the past continue to cast a long shadow, influencing the information used to train AI and perpetuating racial bias in its outputs.

That shadow is already visible in AI outputs. A study by Esmita Charani and colleagues highlighted how stereotypical portrayals in global health, like the “suffering subject” and “white savior,” are reinforced through the imagery used in academic publications. The scholars attempted to use Midjourney Bot, an AI tool, to move beyond these tropes but found it impossible to generate images of Black doctors treating sick white children without defaulting to existing prejudiced narratives. Even with prompts designed to reverse the stereotype, the AI overwhelmingly depicted Black people as the ones receiving care. These issues extend to wrongful identifications and arrests of Black individuals by facial recognition technologies, further complicating matters of racial injustice. Additionally, AI has been responsible for producing stereotype-laden images based on race and gender and for exacerbating negative outputs for protected classes. These are not isolated flaws but reflections of the deep-seated biases embedded within AI’s training data, a direct consequence of historical inequalities. Technology struggles with the subtleties of human identity and interactions. These examples highlight how historical injustices are woven into the very fabric of AI training data and why fairer, more representative datasets are needed.

Real-World Implications of AI Bias

When was the last time you consciously interacted with an algorithm? Consider the subtle nudges from AI in your daily life, the data it uses to tailor what you encounter, and the choices it might be removing from your options. As the legal community navigates the increasingly digital landscape of jurisprudence, the need to examine the foundations of AI has never been more pressing. The seamless integration of AI into daily life, from the algorithms curating our newsfeeds to those predicting recidivism rates, rests on a complex web of hidden biases that could fundamentally challenge the principles of fairness and justice central to the legal system.

Additionally, the lack of explainability in LLM decision-making poses significant challenges, especially in legal settings, where understanding the reasoning behind a decision is crucial and the ability to explain and justify decisions can be as important as the decisions themselves. Justifying verdicts and establishing evidence, intent, and fault all rely heavily on explainability. Opaque decision-making processes can hinder due process and erode public confidence in the legal system. Therefore, addressing algorithmic bias is crucial to ensure the responsible and equitable application of AI within the legal domain.

The consequences of algorithmic bias extend far beyond theoretical concerns. The legal system itself has been implicated in the perpetuation of bias through tools like COMPAS, which disproportionately labels Black individuals as high risk for recidivism. Such technologies, rooted in biased data, threaten the very foundation of fairness that the legal system seeks to uphold. Health care algorithms can discriminate against Black patients, leading to unequal access to treatment. Employment software that filters out candidates based on age or gender further entrenches existing inequalities. These are just a few examples of how AI bias manifests in real-world scenarios, raising significant legal and ethical concerns. The challenge is not only to identify these biases, but also to address the systemic problems they reflect and perpetuate. The legal community has a crucial role to play in mitigating algorithmic bias. We need laws to ensure AI is fair and transparent, especially in the legal system. This means setting data quality standards and requiring algorithmic audits.

Frameworks for Combating AI Bias: Building Fairness

The issue of mitigating bias in LLMs and other AI systems is not just a technical challenge but a fundamental concern that aligns with the principles of justice, fairness, and transparency. Recognizing and addressing the multifaceted nature of algorithmic bias is essential for the development of effective mitigation strategies that are consistent with these principles. To ensure fairness and responsible use of AI in legal practices, a multipronged approach is necessary, one that goes beyond simply fixing technical glitches. First, algorithms must be implemented cautiously, with a clear understanding of their limitations. Next, developing strong strategies for data cleansing and implementing bias-aware algorithms are crucial in preventing AI from perpetuating inequalities within the legal system. Finally, continuous monitoring of AI applications is essential to identify and address potential problems that may arise. This requires concerted efforts that include ethical considerations, technological advancements, and, crucially, deep engagement from the legal community at every stage of AI development and implementation. Key strategies include:

  • Embedding legal and ethical principles: AI development must be grounded in ethical and legal principles of fairness, nondiscrimination, and accountability. Legal frameworks can mandate diverse data sets and bias detection algorithms during development.
  • Fostering collaboration: Effective solutions require collaboration between legal professionals, technologists, ethicists, and policymakers. The legal community’s expertise in due process, equal protection, and privacy rights is crucial for guiding the ethical development of AI, and this collaborative effort is essential for navigating the intricate balance between technological innovation and legal and ethical standards.
  • Promoting public engagement and transparency: Legal professionals in particular must be sufficiently informed to advocate for policies and regulations that promote transparency in AI algorithms, decision-making processes, and data usage. A transparent approach to AI development, one that openly confronts its potential impact across so many areas, is necessary to earn public trust and remain accountable. This includes calling for clear documentation, audit trails, and the development of trustworthy AI, and for the ability to review and challenge AI-driven decisions, especially in critical areas like criminal justice, health care, and employment; a minimal sketch of one such audit check follows this list.
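As one illustration of what continuous monitoring and audit trails might look like in practice, here is a deliberately minimal sketch of a single audit check: comparing selection rates across groups against the familiar four-fifths guideline. The group names and decision log are fabricated, and a real audit would involve far more (multiple fairness criteria, statistical testing, and legal review); the point is only to show the kind of documented, repeatable check an audit regime could require.

```python
# Minimal, illustrative audit check with fabricated data: flag any group whose
# selection rate falls below 80 percent of the best-performing group's rate.
from collections import defaultdict

def audit_selection_rates(records, threshold=0.8):
    """records: iterable of (group, selected) pairs, e.g. from a decision log."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": round(r, 2),
                "ratio_to_best": round(r / best, 2),
                "flag": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical decisions logged from a deployed model.
log = ([("group_a", True)] * 45 + [("group_a", False)] * 55
       + [("group_b", True)] * 28 + [("group_b", False)] * 72)
for group, finding in audit_selection_rates(log).items():
    print(group, finding)
# group_b's rate (0.28) is about 62 percent of group_a's (0.45), so it is
# flagged for review: the kind of signal an audit trail can preserve for later
# legal or regulatory scrutiny.
```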

The legal community can play a pivotal role in defining and establishing these frameworks, overseeing their implementation, and ensuring that they are upheld, thereby preventing the perpetuation of historical injustices and biases.

Where there is no law, AI will develop without boundaries. This is both exciting and concerning, as it raises crucial questions about its potential effects and the people it will impact. We cannot ignore past prejudices; rather, we must understand them in order to address equity and justice moving forward. These considerations invite us to contemplate the unique questions that arise in addressing algorithmic bias. First, there is the dual nature of feedback on algorithms: it can either improve their utility or amplify their detrimental effects in areas like loan approvals or criminal justice. Second, the visibility of algorithmic biases presents a paradox: although they plainly reflect social prejudices, could they also point us toward mitigating the biases embedded in our data? The inherent challenges of addressing algorithmic bias—such as the balance between individual rights and societal needs—highlight the unique role of the legal community in shaping the future of AI. Legal professionals are tasked with tackling the profound questions of how to use AI in a manner that respects individual rights while promoting the common good. This involves grappling with the limitations of current technologies and advocating for advancements that enable fairer, more equitable outcomes.

As AI technology reshapes various sectors, its potential to revolutionize the legal system is both promising and fraught with challenges. To truly realize this transformative impact, however, we must prioritize the development of trustworthy and bias-free systems. The legal community has a crucial role to play in ensuring that AI serves as a tool for progress, not a continuation of historical injustices. By advocating for ethical guidelines, collaboration, and public engagement, we can build a future where AI upholds the principles of a fair and just legal system.

Bugs occur in coding during development; any programmer will tell you this. But if bugs are being discovered in production, that is a sign of inadequate testing. Bugs can be reduced by writing simple, logical code that follows a clear train of thought. The legal community must make the same effort to root out the errors of bias in the data and the resulting harms that biased AI systems cause.

It is imperative that the legal community engages its collective efforts to address the deep-seated prejudices within the data and assumptions underlying AI systems. These biases pose a significant threat to the legal system’s foundational principles of justice and equity. By ensuring transparency and accountability in the design and implementation of AI, we can guarantee that these systems serve as tools for progress rather than perpetuate historical injustices. In this critical moment, the engagement of the legal community is indispensable. Together, we can harness AI’s potential to be a force for good, creating a future where technology upholds and advances the principles of a fair and just legal system.

Sonia M. Gipson Rankin is a professor of law at the University of New Mexico School of Law, where she teaches in the fields of torts, family law, technology and the law, assisted reproductive technologies and the law, cyberTorts, and race and the law.
