
ARTICLE

Ensuring Trust in Artificial Intelligence by Understanding the Role and Importance of Economic Justice

Lakshmi Gopal

Summary

  • A literature review and an ABA survey reveal AI often worsens access to basic rights for low-income and marginalized communities, especially when tools lack adequate testing or oversight.
  • Economic justice is largely absent from core AI policy principles, making the development of inclusive, equitable AI systems an urgent legal and social imperative.
  • Lawyers must be trained to assess and mitigate AI risks, especially for vulnerable populations, and push for transparent, accountable, and just AI integration in public systems.

Introduction & Overview

The American Bar Association’s Civil Rights and Social Justice Section’s AI and Economic Justice Working Group (the “Working Group”) was formed to address the complex challenge of ensuring economic justice amid rapid development and deployment of Artificial Intelligence and automated systems (“AI”). In simple terms, for the purposes of this Report, economic justice requires that no citizen be denied access to basic rights, privileges and opportunities simply because they cannot afford them. Looking at AI through an economic justice lens, the Working Group focuses on the impact of AI on access to basic rights, privileges, and opportunities. The United States has domestic and international obligations that require pursuit of economic justice. These obligations suggest that, at minimum, the development of AI ought not make access to basic rights, privileges and opportunities more costly and thereby more difficult. The Working Group operates on this principle. Moreover, given the transformative potential of AI, the Working Group also considers whether and how AI can be positioned to advance economic justice.

During the Fall of 2022, in the Working Group’s second year, amid growing concerns about the adverse impacts of algorithm bias and targeted advertising on racial minorities, the Working Group (composed of ABA CRSJ Members Alan Butler, Grant Fergusson, Chris Frascella, Lakshmi Gopal, Marilyn Harbur, Alfred Mathewson, James Pierson, and Rubin Roy) set out to map AI’s impact from an economic justice perspective. To this end, the Working Group conducted a literature review to map key challenges (“Literature Review”). The methodology and results of this literature review are discussed in greater detail in Part II of this Report.

Overall, the Literature Review revealed that while AI has significant potential to advance economic justice, when cursorily developed and deployed, AI can have a uniquely negative impact on a category of people that the Report refers to as “low-income and other economically marginalized groups.” The Literature Review also revealed that these same groups face a broad variety of challenges related to the use and development of AI that other groups might not face. These include greater vulnerability to harm resulting from inaccurate or biased AI, fewer avenues for remedy and redress against these harms, weaker oversight and testing for tools specifically directed at low-income or other marginalized groups, and persistent and unavoidable (or expensive to avoid) technological failures.

To better understand these concerns and to fill potential knowledge gaps, the Working Group crafted a survey aimed at legal practitioners, the AI and Economic Justice Survey (“Survey”). As described in further detail in Part II, the Survey collected the views and understandings of close to 200 legal practitioners, representing a wide variety of geographic locations, practice areas, and organizational settings (“Respondents”).

This Report shares key insights from the Working Group’s Literature Review and Survey. It is organized into three parts. Part I provides a general overview of the development of AI and the norms developing around AI. Part II shares the Working Group’s findings based on analysis of the Literature Review and Survey. This includes an overview of the methodology and overall observations, patterns, and conclusions. The appendix provides a list of legal ethics opinions available at the time of publication on professional ethical responsibilities of lawyers when using AI (Appendix I), as well as an annotated bibliography (Appendix II).

The conclusion of the report is simple and straightforward. When developed and implemented with quality, care, and detailed attention, AI can be a boon for low-income Americans who lack access to justice. However, AI also presents the risk of discrimination, exclusion, and other harm. Here, technology is not the core problem. The core problem is a more virulent and all too human one: a lack of willingness to consider economic justice concerns in the development, deployment, and regulation of AI.

The Working Group recognizes the growing access to justice crisis facing most Americans. According to the Legal Services Corporation, 50 million Americans lack access to justice, and 92 percent of civil justice issues go unaddressed each year. With greater attention on the role that AI can play in advancing economic justice, AI demonstrates clear potential to be a part of the solution to America’s growing access to justice problem. The AI Now Report provides a succinct synopsis of the challenge:

AI companies promise that the technologies they create can automate the toil of repetitive work, identify subtle behavioral patterns and much more. However, the analysis and understanding of artificial intelligence should not be limited to its technical capabilities. The design and implementation of this next generation of computational tools presents deep normative and ethical challenges for our existing social, economic and political relationships and institutions, and these changes are already underway. Simply put, AI does not exist in a vacuum. We must also ask how broader phenomena like widening inequality, an intensification of concentrated geopolitical power and populist political movements will shape and be shaped by the development and application of AI technologies.

This Report concludes that in addition to these questions, society needs to ask one more important question: Does a given AI tool or software make it easier or harder for low-income and other marginalized groups to access basic rights, privileges and opportunities—including access to justice? Lawyers, and public interest lawyers in particular, can play an important role in ensuring and advancing economic justice by ensuring that this basic question is asked and answered. Lawyers, and especially public interest lawyers, cannot ignore AI—because it is not going anywhere. Lawyers must instead learn about AI, learn how to use AI, learn how to meaningfully engage with AI, and learn how to advocate to ensure that AI has a positive impact on low-income and other marginalized groups.

Key Definitions

The Report uses many terms that have specific meaning in the context of legal practice. Such terms that are used frequently are defined below.

What is AI?

Artificial Intelligence (AI) enables computers to perform tasks that were heretofore associated with human intelligence. AI allows computers not only to read, see, hear, and speak but also to observe, listen, judge, analyze, communicate, respond, decide, and create in ways that humans do. There is no single definition of “artificial intelligence,” which has been defined as “the capability of a machine to imitate intelligent human behavior.” Others have defined it as “cognitive computing” or “machine learning.” Although many descriptive terms are used, AI, at its core, encompasses tools that are trained rather than programmed. It involves teaching computers how to perform tasks that typically require human intelligence such as perception, pattern recognition, and decision-making.
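To make the distinction between “programmed” and “trained” concrete, the minimal sketch below (illustrative only; the data, labels, and decision rule are hypothetical and are not drawn from this Report or any real system) contrasts a rule written by a person with a rule a model infers from example data:

```python
# Illustrative sketch only: "programmed" versus "trained" decision-making.
from sklearn.tree import DecisionTreeClassifier

# Programmed (rule-based) approach: a human writes the rule explicitly.
def rule_based_flag(income, debt):
    """Flag an application when debt exceeds half of income."""
    return debt > 0.5 * income

# Trained (machine learning) approach: the rule is inferred from labeled examples.
# Hypothetical training data: [income, debt] in thousands, plus past outcomes (1 = flagged).
X = [[40, 30], [90, 10], [30, 25], [80, 20], [60, 45], [70, 15]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)     # this step is the "training"

print(rule_based_flag(50, 20))                 # rule written by a person -> False
print(model.predict([[50, 20]])[0])            # rule learned from the examples
```

The programmed rule does only what its author wrote, while the trained model’s behavior depends entirely on the examples it was given—which is why the quality and representativeness of training data matter so much.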

Some terms that describe how AI works:

For the purposes of this report, the term “AI” includes both “traditional AI” and “generative AI.” AI technology created prior to generative AI is called “traditional AI,” “rule-based AI” or “symbolic AI.” This older form of AI was designed to approximate basic human cognitive functions in environments with well-defined parameters. For example, traditional AI allows computers to play chess and other games with clear, pre-defined rules. Most lawyers already use traditional AI tools, for example, when they run searches on legal databases or when they automatically number documents using AI discovery software.

Generative AI refers to AI software that can generate or create text, images, and other content by studying patterns in pre-existing materials. Lawyers might use Generative AI tools, such as ChatGPT, to draft letters, contracts, briefs, and other legal documents. Generative AI tools analyze large amounts of digital text culled from the internet or proprietary data sources. This process is often referred to as “training” AI.

Some AI is described as “self-learning,” meaning it will “learn,” or recognize and create patterns, as it culls or trains on data. Self-learning AI that can generate language usually involves large language models, or LLMs. “A language model is a machine learning model that aims to predict and generate plausible language. Autocomplete is a language model, for example. These models work by estimating the probability of a token or sequence of tokens occurring within a longer sequence of tokens.”
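A toy example can make the quoted definition concrete. The sketch below (illustrative only; the sentence used as a “corpus” is hypothetical) estimates the probability of the next word from counts in a tiny body of text—the same basic idea, at a vastly smaller scale, that underlies autocomplete and large language models:

```python
# Illustrative sketch only: a toy "language model" that estimates next-word
# probabilities from counts in a tiny, made-up corpus. Real LLMs use neural
# networks trained on billions of tokens, but the core idea is the same:
# predict plausible next tokens from patterns in prior text.
from collections import Counter, defaultdict

corpus = "the court granted the motion and the court denied the appeal".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'court': 0.5, 'motion': 0.25, 'appeal': 0.25}
```

Production LLMs replace these simple counts with neural networks, but the output is still a probability distribution over which token is likely to come next.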

AI can be used as either a consumer tool or an enterprise-level tool. Consumer tools are marketed to and used directly by individual consumers. Enterprise AI is used, in a variety of ways, to affect (and ideally improve) the performance of a business or other enterprise. AI can also be used in government activity. The US Government has a website that catalogues the government’s use of AI.

“AI Ethics” versus “Legal Ethics”:

AI has at least two touch points with the legal profession. First, people who seek legal services use AI in their day-to-day lives and, increasingly, bring matters to lawyers that involve the use or misuse of AI. These matters might raise ethical questions that, in turn, give rise to legal claims. In this context, lawyers are faced with questions about AI Ethics. According to IBM,

“Ethics is a set of moral principles which help us discern between right and wrong. AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes.”

Second, when lawyers encounter or use AI while doing legal work, this gives rise to certain ethical obligations (some already clearly established and others continuously evolving as AI evolves) that relate to professional responsibilities. These obligations require lawyers to ensure that their use of or interaction with AI conforms to the ethical standards that govern the legal profession, as set down by federal, state and local rules of professional conduct. In this context, when thinking about how they are obliged to approach the practice of law, lawyers engage with “legal ethics.”

AI ethics and legal ethics overlap. For example, AI chatbots can “hallucinate,” meaning they can generate results that refer to cases that do not actually exist. This raises concerns for AI Ethics: Should AI tools with significant flaws be made available to the public? If so, what precautions need to be taken in terms of how users interact with flawed AI? Hallucinations also raise questions that sound in legal ethics: Should lawyers and judges be permitted to use AI legal research tools that are likely to hallucinate? How much due diligence is required when an attorney relies on AI generated citations to cases? What are an attorney’s obligations with respect to candor when using AI? Do lawyers require consent before entering client data into AI tools?

This report focuses primarily on the first type of normative inquiry, the normative impact of AI, viewed from an economic justice lens. Touching upon legal ethics, Appendix I of this Report includes a list of legal ethics opinions on how lawyers can ethically use and interact with AI.

Some words used to describe common ways that AI fails:

It is well recognized that while good-quality AI tools, such as AI-powered maps and other apps, have already helped a vast majority of people, AI has been known to fail. “AI tools can generate content that’s skewed or misleading (Generative AI Working Group, n.d.). They’ve been shown to produce images and text that perpetuate biases related to gender, race (Nicoletti & Bass, 2023), political affiliation (Heikkilä, 2023), and more.”

AI failures can be organized into three broad categories: inaccurate content, biased content, and misleading content.

All AI relies on databases for its functioning. Databases are often nothing more than large amounts of information stored in tables. AI processes this data according to how it is programmed to create desired outputs. For example, AI maps process large stores of data on roads, traffic patterns, accidents, and weather events to provide directions. AI is only as accurate, sensitive, comprehensive, or fair as the data that it relies upon and the rules it is programmed to follow to exploit this data (e.g., algorithms). When AI relies on inaccurate or incomplete data, it will generate inaccurate or incomplete results or outputs. For example, AI trained on datasets that contain only light-skinned or white faces will struggle to identify darker-skinned faces.
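The following minimal sketch (illustrative only; the synthetic data and group labels are hypothetical and do not describe any real system) shows how a simple classifier trained on data that underrepresents one group can perform well for the majority group and little better than chance for the underrepresented one:

```python
# Illustrative sketch only: skewed training data produces skewed accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, label_feature):
    """Two random features; which feature actually drives the outcome differs by group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, label_feature] > 0).astype(int)
    return X, y

# Training set: 1,000 examples from group A, only 20 from group B.
Xa, ya = make_group(1000, label_feature=0)
Xb, yb = make_group(20, label_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh evaluation data for each group.
Xa_test, ya_test = make_group(500, label_feature=0)
Xb_test, yb_test = make_group(500, label_feature=1)
print("accuracy for group A:", model.score(Xa_test, ya_test))  # high
print("accuracy for group B:", model.score(Xb_test, yb_test))  # near chance (~0.5)
```

The failure here is not malicious; it follows directly from training data that does not reflect the population the tool will actually serve.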

The problem of inaccurate or biased output becomes even sharper in the context of Generative AI. Unlike traditional AI, Generative AI is not limited by data. Rather, it relies on patterns in pre-existing data to create new “data” with similar patterns. For example, OpenAI has created a generative AI tool that replicates someone’s voice after training on a fifteen-second sample. Tools like ChatGPT provide human-like answers to questions and drafts of documents after “training” on hundreds of gigabytes of text sourced from across the Internet. Often, processing all this data leads to quick, useable results. But often, when processing such huge stores of data across only as many parameters as programmers (and the AI they create) can conceive, AI gets it wrong, outputting fabricated data that appears dangerously authentic. Such inaccurate content is sometimes referred to as “hallucinations.” Famously, AI has even “hallucinated” caselaw, citations, party names and all; lawyers relied on these AI hallucinations, submitted them to court, and were duly sanctioned for not verifying the citations. AI bias can be harder to crack than mere inaccuracy. Inaccurate or false data is often easy to detect, either on its face or in application. However, bias is more elusive. According to IBM:

AI bias, also referred to as machine learning bias or algorithm bias, refers to AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.

The problem of AI or algorithm bias occurs when AI is “trained” on biased data or programmed with implicit bias. What does this look like? Imagine that someone is designing a tool for automated intake for public interest lawyers. Imagine that this programmer is not a lawyer and has never interacted with the legal system. Imagine further that this programmer has not had any experience with racism or other forms of discrimination. The programmer will probably engage in a good deal of research on the types of information that need to be collected during intake. The programmer might also be able to account for problems reported in the literature about challenges that public interest lawyers face when conducting intake with clients. However, no matter how thorough the research, the programmer might not connect the dots to reflect the realities of public interest work. They might collect and organize data around stereotypes that they have about public interest lawyers or public interest clients. The result might be that a tool designed with a white, able-bodied single mother of two in mind will be serving a far more diverse population in ways that might cause serious harm. For example, an AI tool might not recognize certain names, or it might not be able to account for the impact of a specific disability. It might not be designed to get information safely from racial minorities who are more vulnerable to being over-policed. And these are just some of the kinds of problems related to bias that can arise without any intention on the part of programmers or developers to create biased systems.

At the other end of the spectrum, there are also AI tools that deliberately spread misinformation or contain or encourage deliberate bias. “Bots” are one example of deliberate malfeasance using AI. “Bots” are automated online accounts that can be used to spread misinformation. Another example of malicious use of AI is predatory advertising, such as the use of Google’s advertising and search algorithms to target low-income people with online ads for payday loans. Payday loans are short-term, high-interest loans, usually for small amounts, that are due on the borrower’s next payday. These loans can be expensive, they can damage credit, and they can result in debt collection issues. Google has had policies in place since 2016 to prevent AI-enhanced predatory lending practices. However, the practice persists. Similarly, “dark or deceptive patterns” are defined by the United States Federal Trade Commission as “design practices that trick or manipulate users into making choices they would not otherwise have made and that may cause harm.” AI can enhance these and other malicious and deliberate online practices.

Some other key terms we use in this report:

“Economic Justice”: For the purposes of this Report, economic justice requires that no citizen be denied access to basic rights, privileges and opportunities simply because they cannot afford them. The definition of economic justice is contextual. The Delaware Coalition Against Domestic Violence defines economic justice as “the idea that the economy will be more successful if it is fairer.” According to Investopedia, economic justice is “a set of moral and ethical principles for building economic institutions, where the ultimate goal is to create an opportunity for each person to establish a sufficient material foundation upon which to have a dignified, productive, and creative life.”

“Legal work” refers to all work that involves the use of legal skills and knowledge that an organization performs on behalf of the low-income community it serves. It includes legal representation of individuals and groups. It also encompasses non-representational services and forms of assistance such as community legal education and the provision of legal information, pro se clinics and other forms of self-help assistance, as well as studies and reports on issues of general importance to the low-income communities served by the organization. Finally, it includes advocacy in legislative, administrative, and civic settings, done on behalf of clients and/or their communities.

“Legal Practitioner” refers to an attorney, paralegal, law student, lay advocate, or tribal advocate who represents a client of an organization and engages in representational activities authorized by federal, state, or tribal law. Where an activity generally requires a particular type of practitioner, such as an attorney, the Standards and commentary use the specific descriptive term rather than the general term “practitioner.”

“Low-income and other marginalized groups” refers to low-income groups (people living at or below the United States national poverty guidelines), as well as other groups who suffer from additional or other forms of economic injustice. These include groups that traditionally experience higher costs for access to basic resources (e.g., people with disabilities), groups likely to be paid less for the same work (e.g., gender- and sex-based minorities, or racial minorities), and groups more likely to be impacted by algorithm bias (e.g., racial minorities). The Literature Review also revealed that these same groups (referred to herein as low-income and other marginalized groups) face a broad variety of challenges related to the use and development of AI that economically powerful groups might not face. These include weaker oversight and testing for tools specifically directed at low-income or other marginalized groups, and persistent and unavoidable (or expensive to avoid) technological failures—for example, failures in access to high-speed internet or outdated computers or smartphones that cannot access the latest AI tools.

The use of generative AI has proliferated in the past year. As a general matter, this report applies to considerations of how both traditional and generative AI impact the economic margins. For example, to gain from the benefits of either traditional or generative AI, economically marginalized groups need both digital access (the ability to access AI tools which includes, for example, access to high-speed internet) and digital literacy (the ability to know how to beneficially use accessible AI tools). Similarly, the need for more nuanced analysis on the impact of AI on marginalized groups is also applicable to both traditional and generative AI. Just like traditional AI, generative AI is not a monolith, and its impacts need to be studied not just from a design perspective but also with respect to specific use cases and their impacts. For example, a pro se litigant using ChatGPT to research case law will need to take different precautions from a pro se litigant using the same tool to draft a letter. This is because ChatGPT presents different risks for its different use cases and different users. Similarly, a pro se litigant without a high school education will face different risks when using ChatGPT than a pro se litigant with a graduate degree or a GED. As such, a generalized study of ChatGPT might not provide much useful information on its impact on the economic margins. At the same time, given that we are at the beginning of implementation and use of public-facing generative AI, more time is needed before final or specific conclusions can be reached. Amid a growing access to justice crisis, with the knowledge that there are not enough lawyers to address the issues that low-income groups face, identifying effective AI tools that support access to justice is an urgent need.

Part I: AI’s Century

While the idea that machines can be programmed to act like humans is at least two centuries old, the technology behind AI has been in development for more than half a century. The first functional AI programs were created in 1951: a checkers-playing program and a chess-playing program. In 1956, John McCarthy held a workshop at Dartmouth on “artificial intelligence,” marking the first use of the term AI.

Today, AI powers every aspect of the devices people use, from cell phones and computers to home thermostats, locks, and home appliances. AI permeates most day-to-day transactions from online searches and online advertising to online application portals, making key decisions for loan applications, security equipment, and background checks. AI also powers a vast number of legal tasks and functions from legal research tools such as LexisNexis and WestLaw to police investigations—and, in some foreign jurisdictions, China for example, AI has even decided cases as part of government pilot programs on the incorporation of AI in adjudication.

What AI can do has changed significantly since the late 1950s. In 1966, the first chatbot was created to simulate conversation between humans and machines. In 1986, the first significant prototype of a driverless car was created. In 1993, Visa became the first network to deploy AI-based technology for risk and fraud management, pioneering the use of AI models in payments and background checks. In 1996, an AI program beat the world’s best chess player. By the turn of the century, the Internet brought advanced AI technology into everyday life, transforming AI from a niche technology to an essential thread woven into the fabric of daily human activities. In 1998, Google launched its AI-powered search bar. In 2002, the first AI-powered vacuum cleaner hit the market. By 2005, AI was organizing emails, powering online navigation tools like Google Maps, filtering advertisements to websites, and collecting information about online activity. By 2008, AI voice recognition software had given AI ears. By 2010, Facebook’s facial recognition function was automatically identifying faces in photographs. By the end of 2020, AI was powering most devices and driving virtually all digital activities from online advertising to online dating. AI currently makes significant decisions for online mortgage loan, credit card, and job applications. It also plays a key role in security equipment, risk assessment software, and credit ratings.

When considered against AI’s rapid development over the past half-century, AI has been slower to penetrate the legal profession. In 1971, the Department of Justice launched the Justice Retrieval and Inquiry System (JURIS), an AI-powered precursor to LexisNexis and WestLaw. By the mid-1980s, the UBIQ terminal gave top law firms full-text search capabilities and provided lawyers with rapid access to information on legal research databases. In 1978, the Wang word processor enabled automated document creation and production. In 1988, the Judicial Conference of the United States applied AI search and document management capabilities to create a new service, PACER—Public Access to Court Electronic Records—that made court documents publicly available online. Thus, before the turn of the century, AI in the legal profession was used mainly for basic legal research, document management, discovery, and case management functions.

However, by the early 2000s, AI had made significant inroads into legal practice, with AI tools performing a broader set of law-related functions in part because of law enforcement’s early embrace of AI tools. Since the turn of the century, law enforcement has been using AI for facial recognition, risk assessment, auditing of bodycam footage, analyzing DNA, extracting images from videos, detecting evidence in crime scene photos, and countless other tasks. In China, AI-powered “smart courts” are resolving intellectual property, e-commerce, cyberspace, financial, civil rights and product liability matters via a digital court hearing without any human oversight at the first instance and with only two percent of cases appealed to human judges. In the United States, judges already use AI-infused statistical programs in sentencing, for example, to predict chances of flight or recidivism for criminal defendants. Across the globe, advanced legal research tools have been helping many law firms conduct discovery, analyze legal briefs, and isolate the decisions and patterns of specific judges. Notably, given the monthly per-user cost of some of the most advanced AI tools, these tools are, by and large, helping the highest-earning law firms. Lawyers have been using AI to negotiate contracts. There are also reports that litigants have used AI to avoid paying parking fines.

In 2020, the stakes of AI changed drastically with public availability of generative AI tools. Generative AI refers to AI software that can generate or create text, images, and other content by studying patterns in pre-existing materials. Generative AI can imitate human writing or speaking, making it hard for someone to distinguish between materials made by a human and materials made by a machine. Generative AI can answer questions, write letters, draft affidavits, create resumes, mimic voices, create images from text descriptions, and so much more. A combination of various generative AI tools can also be used to create a deep fake—a digital creation that looks human and sounds human but is entirely computer-generated. How well it can accomplish these tasks is still a matter of debate.

All in all, AI is set to transform every imaginable facet of human life, from how people communicate to the kinds of jobs they can expect to do, even more than it has already done in the United States and across an increasingly interconnected globe. AI has already transformed the legal profession. According to the United States National Science and Technology Council, “[a]lthough it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.”

The United States has enjoyed over half a century of detailed legal discussion and debate on norms governing the use of AI, as evidenced by a veritable laundry list of guidelines, policies and laws in place that address normative concerns about the safe and lawful use of technology. To name a few laws, guidelines, and policies: Based on decades of legal debate and litigation around privacy and technology, the US Privacy Act of 1974 was passed, defining requirements for federal agencies to protect personally identifiable information (PII). In 1977, NIST issued the Data Encryption Standard as the first publicly specified cryptographic algorithm, which was used to “protect sensitive data for 30 years, and widely implemented to secure financial transactions and network connectivity, and contributed to the growth of e-commerce.” In 1979, NIST published Automatic Data Processing Risk Analysis Guidance, a method for performing risk assessment based on the work of the IBM Corporation’s Robert H. Courtney, Jr. In December 2001, NIST published the Advanced Encryption Standard (AES), which officially specified the AES algorithm. In 2008, Illinois passed the Biometric Information Privacy Act. Illinois was followed by Texas, which passed its biometric privacy law in 2009. In 2018, California became the first state to pass a comprehensive data privacy act that included coverage of certain AI technologies. Ever since, states have been passing a variety of laws and regulations that cover AI. California’s law followed on the heels of the European Union’s General Data Protection Regulation, which was passed two years earlier, in 2016. Currently, 137 of 194 countries have put in place legislation to secure the protection of data and privacy. More recently, some jurisdictions have passed laws that cover AI specifically. The European Union passed the AI Act in 2024, for example. Prior to this, China drafted and passed a suite of comprehensive regulations that aim at addressing the risks related to AI and introduce compliance obligations on entities engaged in AI-related businesses.

While civil society discussions about the impact of technology have been robust for some time in the United States, as AI started becoming woven into the fabric of everyday life, civil society discussions in the United States on the impact of AI intensified. To name a few texts: In 2014, Cynthia Dwork published a paper arguing that “[a]s electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data,” the need increases for a robust, meaningful, and mathematically rigorous definition of privacy. In 2016, Julia Angwin and the ProPublica team published a study on risk scores, as part of a larger examination of the powerful, largely hidden effect of algorithms in American life, and found significant racial disparities in sentencing recommendations. That same year, Dr. Joy Buolamwini founded the Gender Shades Project, inspired by her struggles with face detection systems. In 2018, Dr. Buolamwini and Dr. Timnit Gebru published the widely known Gender Shades paper. That same year, historian Mar Hicks published “Programmed Inequality,” a history of how Britain lost its early dominance in computing by discriminating against women, and Algorithms of Oppression by Safiya Umoja Noble was published by NYU Press.

Growing scholarship around AI’s normative failures spurred activism within and against leading technology companies. In 2018, Professors Lucy Suchman and Lilly Irani were the first among hundreds of Google employees and academics who penned an open letter calling upon Google to end its work on a controversial US military effort to develop AI to analyze surveillance video, and to support an international treaty prohibiting autonomous weapons systems. That year, the #MeToo movement reached major AI companies as well, with tens of thousands of Google employees worldwide participating in a women-led walkout organized in response to Google’s shielding and rewarding of men who sexually harassed their female co-workers. That same year, Carole Cadwalladr exposed the Facebook-Cambridge Analytica data scandal, in which personal data belonging to millions of Facebook users was collected without their consent by the British consulting firm, predominantly for use in political advertising.

In 2020, most notably, Google fired Timnit Gebru for a paper she later published in March 2021, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, co-authored with Margaret Mitchell, Emily Bender, and Angelina McMillan-Major. The paper highlighted possible risks associated with large machine learning models and suggested exploration of solutions to mitigate those risks. That same year, Abeba Birhane and Vinay Prabhu published “Large image datasets: A pyrrhic win for computer vision?,” which found that massive datasets used to develop thousands of AI algorithms and systems contained racist and misogynistic labels and slurs as well as offensive images. Sasha (Alexandra) Luccioni developed the “Machine Learning Emissions Calculator,” a tool for estimating the carbon impact of machine learning processes, and presented it along with related issues and challenges in the article “Estimating the Carbon Emissions of Artificial Intelligence.” Sasha Costanza-Chock published “Design Justice: Community-Led Practices to Build the Worlds We Need,” a powerful exploration of how design might be led by marginalized communities, dismantle structural inequality, and advance collective liberation and ecological survival.

That same year, Facebook announced the launch of its very first Oversight Board. After initially disputing the findings of researchers, Amazon halted police use of its facial recognition technology. Within the same year, IBM’s first CEO of color, Arvind Krishna, also announced the company was getting out of the facial recognition business. In 2021, Google fired its ethical AI team co-lead Margaret Mitchell, who, along with Timnit Gebru, had called for more diversity among Google’s research staff and expressed concern that the company was starting to censor research critical of its products.

In 2022, Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru and Iason Gabriel published “A Human Rights-Based Approach to Responsible AI.” In 2023, Hilke Schellmann co-led a Guardian investigation of AI algorithms used by social media platforms and found many of these algorithms have a gender bias, and may have been censoring and suppressing the reach of photos of women’s bodies. That same year, the authors of Stochastic Parrots responded to an “AI pause” open letter stating that, “The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Scholars credit Lina Khan’s paper “Amazon’s Antitrust Paradox,” written while she was still in law school, for the turn towards a consumer and antitrust focus that has been enhancing public trust in the AI sector, particularly because Khan was soon afterward appointed to head the Federal Trade Commission. The FTC, together with the Consumer Financial Protection Bureau, succeeded in creating greater awareness about the importance of protecting basic market-based principles against predatory practices easily accessible through AI.

As relevant to this Report, while these changes, especially increased focus on consumer protection, provided trickle-down benefits for low-income and other marginalized groups, overall, economic justice remained largely ignored in government activity and in mainstream civil society discourse.

Part II: The AI & Economic Justice Report: Methodology

Since October 2021, the CRSJ’s AI and Economic Justice Working Group has been using two methods to learn more about the impact of AI on economic justice. First, the Working Group produced an internally facing literature review of relevant laws, policies, principles, norms, guidelines, and legal problems (“Literature Review”). The general findings from this literature review are shared here with key readings and materials listed in the attached annotated bibliography. The Literature Review studied the extent to which current laws, norms, and principles governing the development and application of AI address economic justice as well as the major economic justice concerns triggered by AI. Through the Literature Review, the Working Group also engaged in critical evaluation of existing literature on the topic of AI and economic justice, identifying gaps in knowledge and law, and identifying topics and questions that required further investigation.

The Literature Review was conducted with the aim of achieving representative coverage through searches for relevant articles, reports, and laws from recognized institutions and organizations. Materials were screened for the authority of their source as well as for the extent to which they focused on economic inequality, economic justice, or issues impacting low-income and other marginalized groups. Second, the Working Group created a survey to compare the results of the Literature Review against the experiences, understandings, and observations of lawyers in practice (“Survey”).

Defining AI for the purposes of this survey presented an early challenge. There is no single definition for AI. As noted by the National Science and Technology Council, “[e]xperts offer differing taxonomies of AI problems and solutions.” The U.S. Chamber of Commerce Technology Engagement Center warns against overly prescriptive or overly broad definitions of AI. The Center argues that, given the diversity of current applications paired with an unknown innovation trajectory, a legal definition should be technology-neutral and sufficiently flexible to accommodate technical progress, while precise enough to provide the necessary legal certainty, and that it should focus on systems that learn and adapt over time. Further, the Center recommends that the definition should be accessible to individuals at different levels of understanding and should address AI’s potential impacts on the public, such as AI discrimination or violations of consumer rights. Along these lines, since it was not possible to presume that all legal practitioners (the targeted Respondents of the Survey) share the same or similar understandings of AI, it was clear that AI needed to be defined in a way that would allow it to be easily identified.

Thus, in line with this long-term approach prevalent in American literature on AI, an approach that seeks to ensure both legal certainty and access, for the purposes of the Survey, AI was described as "automated systems," and defined as a wide variety of technological systems or processes used to automate tasks, aid human decision-making, or replace human decisions altogether. Before taking the Survey, Respondents were informed that the term encompassed systems such as pre-trial risk assessments and other risk scoring systems, automated hiring or tenant screening systems, automated public benefits eligibility screening systems, automated fraud detection software, facial recognition systems, student monitoring systems, automated screening tools used for loan applications, ChatGPT and similar generative A.I. tools, and a variety of other systems that use some form of automation to aid or replace human decision-making.

Using this definition, and based on questions that emerged from the Literature Review, the Working Group developed and implemented a mixed-methods survey that collected experiences and perspectives from a broad cross-section of Respondents, primarily lawyers serving mostly low-income and other marginalized clients across a variety of practice settings, including solo, medium, and large law firms, non-profits, the judiciary, and state and local government. Respondents engaged in a diverse array of practice areas: consumer law, criminal justice, employment law, housing law, immigration law, education law, public benefits law, and others. Information about the Survey was shared widely across the legal profession, with the Survey targeted at legal practitioners. Respondents were asked to describe their areas of professional practice and demographic details about the populations that they serve. Based on this pre-screening, Respondents were directed to various parts of the Survey such that, for example, Respondents practicing consumer law would answer questions relevant to their current legal work. All Respondents completed a general section which included quantitative and qualitative questions. Throughout the Survey, AI was not treated as a monolith. Respondents were asked about their experience and understanding of specific types of AI tools as well. They were asked separately about their knowledge of the experience and understanding of their low-income clients with specific types of AI tools. Thus, the Survey collected a variety of experiences and understandings from across a variety of practice areas and settings, about a number of AI tools as they are deployed in the context of specific practice areas.

Finally, the 2023-2024 Survey focused, in the main, on traditional AI. Traditional AI tools are the most widespread forms of AI currently in use, notwithstanding the very recent explosion of generative AI tools such as ChatGPT. While some tools covered by the Survey are strictly within the realm of traditional AI (for example, binary decision-making algorithms), other tools that the Survey covers, chatbots for example, implement basic generative AI technologies. More detailed review of generative AI tools will become possible as their use continues to develop. In short, the AI space is evolving at a pace that makes any point-in-time survey incomplete.

Part III: Observations

Economic Justice is Absent from AI’s Normative Core:

The Working Group’s essential learning is that economic justice is not being given consistent or substantive consideration when it comes to the development and use of AI. One study in particular best exemplifies this finding: “Principled Artificial Intelligence,” a study by the Berkman Klein Center for Internet and Society at Harvard University. After analyzing technology policies authored over the past several years, the Berkman Klein Center identified a “normative core,” a set of core principles broadly recognized as shaping the development of AI. Notably, analysis of this normative core reveals that it does not include substantive discussion of or concern for economic justice.

Inadequate Analysis of the Impact of AI on Economic Justice:

The impact of AI varies considerably based on user, usage, and context. Rather than treat AI as a monolith, the Survey asked Respondents about their attitudes toward specific AI tools and functions (e.g., fraud detection tools, chatbots, online application tools). Respondents reported that different AI tools had different impacts, with qualitative and anecdotal responses detailing different impacts based on different use cases and different contexts. Overall, Respondents rated applicant screening systems, automated screening systems, and risk scoring systems as detrimental to low-income clients. Respondents had mixed attitudes with respect to fraud detection systems and generative AI tools, which many Respondents found beneficial or neutral in terms of impact on low-income or other marginalized clients. The observations culled from the Survey were consistent with the literature. AI is not a monolith, and different AI tools are having different impacts based on their use as well as their users.

Lack of Safeguards to Assure Safe Design and Quality:

The Literature Review and the Survey indicate that AI tools designed for low-income and other marginalized groups suffer from risky design and are more vulnerable to technological failure. Many Respondents reported that the effectiveness and positive value of AI depended on its design, use case, and implementation, with one Respondent clarifying that “[t]he impact will depend on how well the tool is designed and how fully it is tested before going live,” and commenting that high-quality implementation might not occur for a few years, or maybe ever, for systemically excluded groups. There were also concerns about a lack of adequate testing and piloting before sending tools out to the market, especially tools directed at low-income or marginalized groups. Survey Respondents voiced consistent concerns about the negative impacts of technological failures and poorly designed AI tools, with a significant number of Respondents describing AI tools as “unhelpful, unresponsive, and inflexible.” Furthermore, a significant number of Respondents reported difficulty resolving technical problems, with some describing struggles to troubleshoot technology failures. In this context, one Respondent stated that AI systems have proven to be “serious barriers to meaningful access for a large percentage of their clients” because of such technological failures. Relatedly, Respondents raised concerns about unequal access to better-quality AI tools.

Failures in Digital Access:

The Literature Review and the Survey point to concerns about digital exclusion resulting from inadequate access to digital infrastructure. Respondents consistently reported additional burdens for clients who do not have access to high-speed internet or a computer. For example, clients who could only access the Internet from their mobile phones could not access services or websites designed to be accessed on computers. Describing the access gap between urban and rural areas, one Respondent reported that some digital technologies are only now reaching rural areas. Another Respondent noted that “[a]mong those living in poverty, the elderly, monolingual, low-income non-English speakers and people with disabilities, access to technology is non-existent.” Language was also raised as an issue that compromised digital access, given that most online applications are only available in a few languages. In the immigration context, a Respondent pointed to language issues, noting that “[i]f [AI and automated systems are] difficult for me to understand, you can imagine how difficult it would be for immigrants/refugees to understand.” Access to specialized AI was also a concern raised in the literature. An empirical study conducted by the University of California, Berkeley, noted concerns about resource disparities resulting in specialized tools being developed to overcome the shortcomings of general purpose products, when specialized products are not necessarily accessible to resource-limited lawyers and consumers. The study found that “technology often fails to be a leveler and that low-income and marginalized individuals often do not reap the full benefits of product innovation — because it is not priced or otherwise within reach, deprioritizes their unique needs, or tends to cater to the better off.” In particular, AI has caused significant access issues for people with disabilities, including access to websites, application tools, and other AI and AI-powered tools and software.

Failures in Digital Literacy:

The Literature Review and the Survey indicated the importance of digital literacy for mitigating economic injustice in the contexts of AI. Survey Respondents expressed concerns about access and exclusion that stemmed from client issues with digital literacy. Respondents noted that, in general, education levels played a more significant role than income levels, with lower levels of education making access to and use of the Internet or digital tools more difficult or risky for low-income clients, potentially signaling that a limited education rather than a lower income might be a deeper cause of digital exclusion. Equally, Respondents noted that a lack of digital literacy made clients more vulnerable to risk online. Respondents reported having served clients who were not equipped to evaluate the quality of information found online and who were, therefore, more vulnerable to being tricked or defrauded online.

Failures in Transparency and Explainability:

The Literature Review and the Survey point to concerns about burdens caused by a general lack of transparency and explainability with respect to when AI is being used, how it is used, and how it works. This problem was seen as impacting not only low-income and otherwise marginalized clients but also the lawyers who serve them. One Respondent characterized this lack of transparency as “abysmal,” stating in essence that it is hard to get any information on how AI tools work and when they are used—even for lawyers and their investigators actively searching for this information.

Lack of Training for the Bar and Bench:

The Literature Review and the Survey point to a need for more education on AI across the bar and the bench. Respondents raised concerns about the legal community’s lack of familiarity with AI. Most Respondents reported that they would feel uncomfortable explaining how automated systems worked. Equally, Respondents showed uncertainty about when AI systems are used and about the underlying technology. Relatedly, Respondents also expressed concerns about a lack of adequate training that resulted in lawyers misusing AI technologies. A Respondent reported a case in the immigration context where a lawyer used an AI tool to translate a document but did not review the AI translation, causing a client to lose the case. Based on this incident, the Respondent observed that AI is not always the problem: “it's common sense that translating platforms are a start not an end. You always check.” When read against the available literature, the Working Group understands these comments as reflecting not only the need for lawyers to understand the limitations of the technologies they use but also the need for better training on these limitations.

Public Sector Leadership:

Respondents to the Survey, in particular, expressed dissatisfaction with the lack of a critical approach to AI at the government level. Respondents expressed confusion about the government’s role and leadership on AI. One Respondent commented, “I don't know if the government agencies are being critical of AI, or asking the questions about inclusion, bias in the data, etc., or privacy. Most of the people in the [conversations] are not from BIPOC or low income or those who have served time. [It is a v]ery exclusionary conversation.” Some Respondents were concerned that government systems were being designed to take advantage of those with less knowledge. A lack of government training on AI was also a concern raised by Respondents working in the public sector: “no one understands how these ‘black box’ tools work.”


Part IV: Literature Review Findings

The potential consequences of AI for humanity have been discussed as theory for two centuries. As discussed in Part I, as self-driving cars, biased algorithms, and faulty databases started causing measurable harm in the real world, normative discussions about the impact of AI developed increasingly urgent significance. Building on decades of scholarship, numerous social justice campaigns set off revolutions in how we think about the norms governing AI that forced the Tech Industry to acknowledge AI’s negative impacts on social and political inequalities. In recent years, these successful campaigns led to broad recognition of “social justice” as a valid principle underlying the design and implementation of AI.

The same cannot be said with respect to AI and principles of economic justice. Economic justice is a broad term with several different implications. For purposes of this report, economic justice requires that the development and application of AI should not reduce access to justice (and related goods and resources) for low-income and other marginalized groups. For example, AI has made it easier for companies to target online advertisements for predatory loans and other questionable financial instruments at low-income and other marginalized groups. This is a negative impact of AI on economic justice. AI allowing free access to credit scores for all would be an example of a positive impact of AI on economic justice.

So far, the subject of economic justice has not been tackled in any depth or with any seriousness when it comes to normative discussions about AI. Rather, discussions about the impact of AI on economic justice often boil down to discussions about AI’s impact on the workforce and the fear that AI will replace workers across multiple industries. Similarly, while the establishment of rights to privacy, consent, control over data, and the ability to restrict processing all have positive impacts for low-income and other marginalized groups, privacy in and of itself does not address economic injustice in any tailored or deliberate manner. Many jurisdictions now prohibit “dark” or “deceptive” patterns. These are visual features of a website or other interfaces that “trick users into doing things, such as buying overpriced insurance.” While each of these topics stands to benefit low-income and other marginalized groups, none of these ongoing discussions impacts the development of AI itself—especially in terms of what tools get developed, for whom, and how.

The broader literature on the norms governing the development of AI shows a lack of normative concern for economic justice in the field of AI. “Principled Artificial Intelligence,” a study by the Berkman Klein Center for Internet and Society at Harvard University described major AI policy documents as converging around eight major themes that make up a normative core for a principles-based approach to AI governance. The eight themes are (1) privacy, (2) accountability, (3) safety and security, (4) transparency and explainability, (5) fairness and non-discrimination, (6) human control of technology, (7) professional responsibility, and (8) promotion of human values. The study found that these documents “suggest the earliest emergence of sectoral norms.” While each of these themes could theoretically bolster economic justice, none of them directly address the impact of AI on economic justice. This means, based on this study and overall literature on the impact of AI, that AI’s current normative core does not include substantive considerations of economic justice.

Looking at each theme in greater detail, according to the Berkman Klein study, “fairness and nondiscrimination principles” have been universally accepted and are understood as calling for AI systems to be designed and used to maximize fairness and promote inclusivity. “Promotion of human values,” adopted by 69% of documents in the dataset, requires that “the ends to which AI is devoted, and the means by which it is implemented, should correspond with our core values and generally promote humanity’s well-being.” Finally, according to the study, approximately fifty percent of policy documents included the principle of “accountability,” which addresses concerns about who will be accountable for decisions that are no longer made by humans, and the role of impact assessments to assess “technology’s impacts on the social and natural world” at “three essential stages: design (pre-deployment), monitoring (during deployment), and redress (after harm has occurred).” In addition, “accountability principles are frequently mentioned together with the principle of transparent and explainable AI, often highlighting the need for accountability as a means to gain the public’s trust in AI and dissipate fears.”

While these and other considerations developing around AI stand to enhance economic justice in certain contexts and for certain use cases, the reasonable question that the Berkman Klein study raises is whether concepts such as “fairness and nondiscrimination” or “promotion of human values” are enough to ensure that AI does not worsen pre-existing economic inequalities. The Working Group’s literature review concludes that the answer to this question is a clear “no.”

Across mainstream literature on AI policy, only three policy documents provide a direct vision of what economic justice might look like in the context of the development of AI. All three documents are from foreign sources. “Smart Dubai” is a strategic policy initiative launched by the Government of Dubai that aims to further Dubai’s transformation into a “smart city.” Smart Dubai’s policy statement accounts for the impact of disparities in digital access and literacy in economic terms. According to the policy statement, “AI should improve society, and society should be consulted in a representative fashion to inform the development of AI.” In stating that AI systems should be fair, the policy statement requires, for example, that AI systems that help detect communities in great need after a natural disaster should account for the fact that “communities where smartphone penetration is lower have less presence on social media, and so are at risk of receiving less attention.”

Similarly, the G20 AI Principles urge that “[s]takeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.”

Providing the most comprehensive vision for economic justice, the Government of Japan’s “Principles of Human-Centric AI” report touches upon ideas of inclusive and equal flourishing. The report describes AI as a tool for “a society where people can better demonstrate their various human abilities, show greater creativity, engage in challenging work, and live richer lives both physically and mentally.” The principles envision AI as bringing us closer to “an ideal in the modern world and a major challenge to create a society in which people with diverse backgrounds, values and ways of thinking can pursue their own well-being while society creates new value by flexibly embracing them.” The report also describes a “Society 5.0” where people should fully understand the advantages and disadvantages of AI, including understanding bias as a general concept and three specific kinds of bias: statistical bias, bias caused by social conditions, and bias arising from malicious intent. Society 5.0 tackles “resolving negative outcomes (such as inequality, widening disparity, social exclusion, and so on) that may result from the evolution of AI.”

Japan’s report goes into further detail, listing “Social Principles,” which require “all people to enjoy the benefits of AI and avoid creating a digital divide with so-called ‘information poor’ or ‘technology poor’ people left behind.” The Principle of Education/Literacy states that “In a society premised on AI, we do not desire to create disparities or divisions between people or create those who are socially disadvantaged. Therefore, policy makers and managers of businesses involved in AI must have an accurate understanding of AI, knowledge and ethics permitting appropriate use of AI in society.” The Principle of Fair Competition states that “The use of AI should not generate a situation where wealth and social influence are unfairly biased towards certain stakeholders.” The Principle of Fairness, Accountability, and Transparency requires that “people who use AI are not subject to undue discrimination with regard to personal background, or to unfair treatment in terms of human dignity.”

Japan’s “Principles of Human-Centric AI” provide the most explicit commitments to ensuring that AI does not worsen pre-existing economic injustice and inequality. Nonetheless, the Smart Dubai Principles, the G20 AI Principles, and Japan’s Social Principles of Human-Centric AI all signal the importance of economic justice for AI but contain no specific or actionable guidance on how to ensure that AI enhances rather than undermines economic justice. While Japan’s Principles provide the most comprehensive and direct embrace of economic justice as a fundamental necessity for the healthy development of AI, they do not offer a clear framework or approach for translating what is essentially an equitable vision for society into a concrete plan that could lead to better conditions for low-income and other marginalized groups with respect to AI technology.

There is a need for a more detailed approach to considering the impact of AI on economic justice. As an illustration, an important study answering the practical question of how to create AI in a way that can enhance and support economic justice was conducted at the University of California, Berkeley (the “Berkeley Study”). This study focused on the use of AI to address the access-to-justice gap. It found that generative AI tools can “significantly enhance [the work of] legal professionals and narrow the justice gap, but how they are introduced matters.” The study highlighted, amongst other findings, that although women comprise the majority of public interest lawyers, organic uptake of generative AI was much higher among men in the study, showing the need to address gender disparities in the use of AI. The study concluded that appropriate assistance and support resources can also improve the adoption of AI tools.

The Berkeley Study is an important point of departure. It is a context-sensitive and specific examination of obstacles to the adoption of AI tools by public interest lawyers. It offers observations that are complementary to the central observation of this report: that failures in AI equity are not inherent to the technology itself. Rather, the Berkeley Study touches upon the importance of effective introduction of AI tools and the potential gender biases preventing or discouraging adoption by women. However, what the Berkeley Study lacks remains a considerable and urgent blind spot for the technology sector and for all those seeking to understand, harness, regulate, or guide its development: there is no independent picture of how the AI tools themselves are impacting low-income and other marginalized groups.

Outside of a few generalized surveys, there is currently no particularized understanding of how AI is impacting low-income and other marginalized groups from the perspective of these groups themselves. Certainly, the introduction of AI into the legal profession stands to increase efficiency. No more drafting documents from scratch, no more poring through thousands of pages of discovery by hand, and, some even suggest, potentially no more lawyers. With heavy caseloads and little time to give clients individualized attention, AI tools present great promise when it comes to lightening the load of public interest lawyers. But how does one know whether this increased efficiency will lead to better outcomes overall for the client populations being served? What is the impact of certain automated functions (e.g., fraud detection, authentication, risk assessment) on economic justice? Where do biases against low-income or other economically marginalized groups risk the greatest harm? The current literature cannot provide clear answers to these questions, even as it advocates strongly for widespread adoption of AI tools.

More detailed studies are vital because they provide more than just a vision of the potential of AI for enhancing economic justice. They provide action items for government, business, and the professions alike. Take, for example, the isolated finding that organic uptake of AI tools by women, who make up the vast majority of public interest lawyers, is lagging. This means that AI tools that stand to enhance economic justice are not reaching economically marginalized populations because a culture has not developed around their use. This is a problem that cannot be solved by AI alone. It requires cultural, social, political, and other kinds of messaging and interventions. It is also not an intuitive or obvious finding. The Working Group’s Literature Review and Survey indicate that there are even more valuable and specific findings to be made that will ensure not adoption of AI for adoption’s sake, but adoption for targeted and thought-through purposes aimed at creating measurable benefit to low-income and other economically marginalized groups.

Part V: Sector-based Respondent Observations

Consumer Law:

Amongst those survey Respondents working on consumer law issues, a majority worked on behalf of consumers, many on behalf of nonprofits, charitable organizations, or community organizations; a small number worked on behalf of government or for-profit companies.

Respondents reported harm resulting from data collection, processing, or misuse, including issues related to online tracking, targeted advertising, or data breaches. Only a small fraction of Respondents reported experiencing no such harms, with more Respondents reporting uncertainty as to whether they or their clients have experienced AI harm in the consumer context.

Cumulatively, Respondents reported having experienced every type of harm listed in the survey, including difficulties resolving consumer disputes, difficulties obtaining accurate information, harm from online targeted advertisements, discrimination based on profiling of their online activities, being shown lower-quality or predatory products and services, harm from data brokers' use of their data, harm as a result of a data breach, difficulty correcting or removing their personal data online, and general data privacy harms. Echoing digital exclusion and access to justice concerns, the most frequently reported harm in the consumer law context was unequal access to information or resources online.

Further, Respondents reported erroneous wage garnishments based on algorithmic or automated determination; erroneous default judgment based on data broker information used in a dataset; and bank accounts or other financial accounts closed incorrectly based on an algorithmic or automated determination. One Respondent described experiencing harm centered predominantly in housing, benefit investigations, identity fraud, and financial domestic abuse.

While many Respondents stated that they themselves, their organization, or their client had encountered issues involving deceptive or manipulative practices (often called "dark patterns"), an equal number of Respondents were unsure about what this meant.

Respondents reported knowledge of advertisements being disguised to look like independent digital content. Respondents reported problems resulting from businesses using/buying key terms or junk fees to trick consumers into sharing their data. Respondents reported issues with data collectors not allowing the deletion of accounts or data, not giving consumers free, no-fee options, keeping them in a subscription they can't afford and not affording them offline options to get help, sharing data without consent, and luring consumers to opt-in to services without a clear means of opting out. Relatedly, Respondents reported instances of AI making it difficult for consumers to cancel subscriptions or charges. Respondents also described facing issues with automated credit underwriting, automated pricing models, home valuation models, and predatory lending algorithms.

In addition, in the consumer law context, Respondents reported consumers’ digital access and exclusion issues stemming from unstable electricity and Internet access. Respondents viewed the lack of access to high-speed internet as a significant barrier to accessing the value that AI has to offer consumers. They also reported that a lack of adequate skills to use AI services effectively was a significant barrier in the consumer context. One Respondent reported two specific problems: first, many support services and support scripts do not factor in the needs of those with cognitive disabilities; second, almost everything is in English. Respondents reported that “automated systems assume a shared language/understanding of specific terms, such that if a person doesn't understand a question or their situation is complex, a ‘wrong’ answer prevents them from going forward.”

Signaling access to justice concerns for consumers, several Respondents noted frustrations with being screened out of automated systems without any recourse to a human who could fix the problem, with some facing cost-barriers when it comes to moving beyond AI assistance to get specialized help from humans. One Respondent pointed to challenges posed by issues of jurisdiction, stating that clients harmed by online companies with minimal contacts in the United States can have a difficult time when seeking remedies to digital harms, as these companies are not under the jurisdiction of American courts, or the clients have contracted away their right to seek recourse in court or administrative proceedings.

Notably, no Respondents could identify any local, state, or federal government efforts in the consumer law context that are working to address potential negative impacts of AI on consumers. One Respondent cited reliance on private foundations rather than governments, stating that “legal funders are not being critical or questioning the lack of BIPOC/LGBTQ and LEP [Disabled] communities in the conversation--even when Congress funds them to fund those that serve low-income people equally. Even Legal Services Corporation funded nonprofits are not in the conversations, so no inclusion not even by proximity.”

Criminal Justice:

In the criminal justice context, while Respondents reported understanding how automated systems impacted their clients or their work, only half of the Respondents expressed feeling comfortable explaining how automated systems encountered the world. Almost all Respondents shared concerns about implicit bias in AI and automated systems in the criminal justice system. Respondents indicated that certain uses of AI posed greater risks than others. Respondents reported that digital literacy and opaque/proprietary/simplistic code were issues for those targeted as suspects or defendants by the criminal justice system. At the same time, Respondents provided positive feedback about electronic notification systems. While Respondents were split on the use of AI for bail and sentencing decisions, more Respondents saw AI as improving criminal justice outcomes like bail and sentencing decisions. Most Respondents reported that information about automated systems used in the criminal justice system was not accessible to the public.

Amongst barriers criminal suspects or defendants faced when dealing with automated systems, Respondents listed lack of internet service or phone access; lack of digital literacy and difficulty reading; difficulties or complexities in using the AI tools; and opaque, proprietary code.

Respondents also demonstrated differing attitudes to AI based on their role in the criminal justice system and their familiarity with AI tools. Concerns over the use of AI in the criminal justice system ranged from fears that AI would further dehumanize criminal suspects, defendants, and inmates, to worry that there was too much prejudice against AI tools.

Finally, in terms of strategies to handle any concerns or issues related to the systems, Respondents listed having backups for service by email (including old-fashioned certified mail), training and education on AI, and reviewing the limitations and vulnerabilities of the systems that lawyers or their clients interact with.

One Respondent pointed to the Arnold Tool, which looks at relative risk based on a number of factors for every person arrested on a felony. The Respondent said that there have been questions raised about appropriate use of the tool and that, while judges have discretion to override reliance on the tool, it can be hard to strike a balance in practice, especially when it comes to concerns about risk assessment tools being based on biased data.

Education Law:

Survey Respondents working on issues of education law largely represented or advocated on behalf of students or parents. They reported digital access/exclusion, legal training, and harms as the major AI-related challenges in the education law context.

With respect to digital access/exclusion, Respondents cited concerns regarding the lack of broadband availability for rural communities and students, flagged difficulties that AI systems present for students with physical or learning disabilities, and pointed to the need to provide students with wider access to technology and to educate students about the use of automated systems.

The need for adequate continuing education for lawyers seemed most acute with respect to education law, as most Respondents in this field reported being not at all familiar with any common AI tools being used in education. They also expressed contrasting views about the value of AI in the educational setting. A clearer picture of the impact of AI in education will likely not emerge until there is better education and awareness amongst lawyers working in this area.

Employment Law:

Survey Respondents working on issues of employment law worked in a variety of capacities, including plaintiff-side litigation, defendant-side litigation, the judicial system, academia, and for-profit companies.

Among these Respondents, there was a consensus that automated employment systems are, in their current form, exacerbating economic injustice. The major harms identified by Respondents related to design and access. Respondents reported that certain AI-driven employment tools created significant access issues, ranging from the inability to easily access online job applications to the inability to access digital work tools such as AI chats or other productivity tools. Respondents expressed concerns that, even when accessible, AI-driven tools tend to be biased, unresponsive, inflexible, and difficult to use. Respondents reported concern that minorities and marginalized groups are more likely to be overlooked or excluded by hiring tools and likewise more likely to be penalized by AI-driven surveillance systems, which can be especially problematic for disabled workers. Respondents raised a variety of other concerns, including how the design of user interfaces creates a veneer of objectivity over subjective processes such as hiring or work evaluations.

More Respondents felt confident in their knowledge of obligations on employers and third parties with respect to assessing, evaluating, and using automated systems in the workplace and expressed comfort with advising clients or educating non-lawyers on the legal impacts of automated employment systems. At the same time, while Respondents felt that they understood how AI is used in the employment context, Respondents expressed less familiarity with state and federal laws that could help redress harms caused by AI.

With respect to tools that can be used to overcome the challenges presented by AI, the solutions Respondents relied on ranged from no tools at all to reliance on statutes, including the National Labor Relations Act of 1935, Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act of 1990, the Fair Labor Standards Act of 1938 (29 U.S.C. § 203), Occupational Safety and Health regulations and policies, and antidiscrimination laws more broadly.

Housing Law:

Most Survey Respondents represented housing applicants or tenants, with some representing landlords, property management companies, vendors of automated systems, government contractors, and local, state, or federal government agencies. Other Respondents described providing online tools for tenants and the lawyers representing them, advising housing clinics, identifying housing issues for referral, representing tenant associations, or representing the best interests of children in the housing context.

The survey indicates that traditional AI models have had a negative impact on low-income tenants. Most of these Respondents stated that they had encountered issues involving harms caused by tenant screening reports. While some stated that they were unsure whether they had encountered such issues, only a small minority stated that they had not encountered any. Respondents reported encountering various issues involving the use of AI and automated systems in the housing context, including problems with tenant screening reports: incomplete reports, reports containing information about eviction outcomes (which should be sealed), or reports containing factual errors.

Respondents reported problems that arose when a tenant screening system made a negative decision based on insufficient credit history alone. They also reported negative outcomes caused by typos, wrong names, false identities, identity theft, social security number theft, pay documentation issues, and automated systems that recommended eviction because they could not properly assess risk arising from the misconduct of children.

Further, Respondents observed that structural biases in how credit is rated cause difficulties for marginalized groups; for example, merely generating a credit report can reduce one’s credit score, making it harder to shop around for loans or other tools that provide important economic support to members of marginalized groups.

Concerns with digital exclusion and access to justice were also particularly sharp in this sector. One Respondent described due process issues, stating that notices by email are not helpful for a significant number of tenants who do not have regular access to the Internet. Another Respondent described the "black box" effect, where a lack of transparency in AI-driven decision-making processes leads clients to suspect, but not know, why they were not chosen for housing. Respondents also indicated digital access issues related to most AI tools being available in English only, and in English that is too “high register” or complex.

In terms of successful legal strategies, one Respondent highlighted the importance of advising clients to dispute background reports, particularly when eviction records have been sealed. Another stated that making reasonable accommodation requests to disregard history where disabilities were a contributing factor had also proved fruitful.

Practitioners advise that generative AI tools, which hit the mainstream in 2024, have the potential to improve the methods by which legal services organizations deliver assistance to tenants and homeowners. GenAI chatbots have proven effective at providing simple and direct guidance to people who seek information about getting repairs made, applying for rental assistance, and other questions about their rights. These tools are typically free or low-cost and are accessible to anyone who has a smartphone or similar mobile device.

Immigration Law:

Survey Respondents who answered questions on immigration law mostly represented immigrants or refugees, with a minority representing government agencies or working in academia. Most Respondents reported encountering three types of data-related harms in the immigration/refugee context. First, immigrants/refugees struggle to access or correct their own data. Second, data about immigrants/refugees is inaccurate or incomplete. Third, data is improperly collected and provided to law enforcement or immigration officials. Respondents described access to justice issues that resulted from such data failures. For example, one Respondent described instances where immigrants were denied relief because they could not prove a record did not exist, or where they were blamed or profiled because their name matched that of someone else who had done something wrong.

Respondents reported significant barriers resulting from technological failure, including issues with access and system downtime. Respondents described being unable to get past United States Citizenship and Immigration Services’ automated system, resulting in the inability to obtain necessary records or to reschedule biometrics appointments. These technical failures led some Respondents to conclude that reliance on machines alone is, at this stage, problematic because it makes it difficult to reach humans who might help solve problems resulting from technical failures. Respondents described incorrect translations into other languages. Further, Respondents described failures resulting from a lack of technical literacy to interact with systems effectively and from racial profiling and other systemic biases. Tying together data issues and issues with technological failure, Respondents noted that because many immigrants have similar names, there have been issues of mismatched identities (which was also noted as a problem in the housing sector).

As indicated above, language was also an issue in the immigration context, with Respondents raising concerns that low-income groups are unable to communicate in English and are unfamiliar with U.S. systems—making access to AI and automated tools even more challenging or impossible.

Adequate training on AI and automated systems for the legal community was also a concern in the immigration context. Amongst Respondents in this area, while there was some familiarity with facial recognition technologies, no Respondent indicated being extremely familiar with any AI technologies.

Notably, Respondents expressed a feeling that laws and legal tools have not proven helpful in addressing issues in the immigration context, especially when it comes to issues as simple as reaching a human person at USCIS. One Respondent reported that “we have tried calling at 6 or 7 am hoping we would be more likely to be able to get connected to a human person before it became too busy.” Another Respondent highlighted the importance of framing issues clearly when dealing with USCIS and recommended strong advocacy with USCIS to correct any issues, usually by applying for a new document with the information corrected. Such issues might not be directly related to data or technology failures. Rather, they might signal broader structural issues that require training government staff on how people are impacted by the use of technology and by technological failure.

To address AI-related failures in the immigration context, one Respondent recommended recourse to protections against discrimination in The Immigration Reform and Control Act (which does not cover undocumented people, only citizens, nationals, and authorized aliens).

With respect to efforts by local, state, or federal governments to address potential negative impacts of automated systems on immigrants or refugees, the only activity reported was community efforts in Boston to ban facial recognition software and other surveillance technology altogether.

Public Benefits:

Most survey Respondents answering questions on public benefits were supporting benefit recipients, with a small minority supporting government contractors, working with government, or consulting with government externally. Amongst this group, there was no observable consensus on the impacts of any specific AI systems. However, most Respondents disagreed with the statement that automated systems improve public benefits programs, and most strongly agreed that automated systems harm recipients of public benefits. There was also consensus that relevant agencies or offices could not easily change the automated systems they use.

With respect to barriers faced by public benefits recipients when dealing with automated systems, concerns were similar to those identified in the housing and immigration sectors. Lack of transparency was a common theme across responses with one Respondent highlighting that there is “[n]o room for explanation. Automated systems don’t allow for you to explain the information requested.” Language barriers were also mentioned, as were concerns around not providing beneficiaries with “sufficient notice (due process)” and “not enough human support.” One Respondent noted how AI failures compound, describing how design failures force clients to provide imperfect responses that result in discrepancies that trigger fraud detection systems.

Sharing general observations, one Respondent who served low-income public benefit recipients in two states (Michigan and North Carolina) listed some of the biggest issues in the public benefits arena: systems are unusable; systems cannot be updated; local departments of social services have little to no control over how their internal systems are managed, so they cannot fix bugs or update claims easily, and because they do not understand how their systems work, they use workarounds that then harm claimants; and the lack of access to the code for programs (particularly around initial eligibility screening and fraud flagging) is dangerous for claimants and prevents them from having meaningful access to these programs. Another Respondent highlighted issues with systemic bias, noting that “[t]he whole system assumes those [seeking] benefits are scoundrels and doing fraud, etc. The whole use of these tools is biased and punitive.” Another Respondent noted age-related issues, stating that elderly persons not familiar with automated systems face additional barriers in accessing public benefits. Finally, another Respondent characterized the public benefits systems as “appalling,” stating that AI-driven tools “are impenetrable and confusing, responses are slow or inconsistent, it's difficult to get human assistance. They put terrible stress on already burdened clients.”

According to Respondents, laws, tools, legal remedies, and strategies to address these barriers included lawsuits; Title IV complaints; complaints to the Department of Labor; Office of Civil Rights complaints (if clients are unable to access the systems for lack of language options); manually reviewing determinations to ensure they are correct; and asking public benefits organizations to redesign their websites to better support applicants.

Procurement:

Overall, responses to questions on procurement of AI (not including the use of AI for procurement) indicated some degree of confusion about the difference between legal due diligence and additional consideration of AI’s impacts on low-income and other marginalized groups. For example, one survey Respondent seemed to suggest that anti-bias policies were sufficient to account for the impact that automated systems have on low-income or marginalized groups. However, economic injustice is not necessarily the result of explicit bias against low-income or otherwise economically marginalized groups. Rather, it is usually the result of systemic barriers, difficulties, differences, or blind spots.

While some Respondents described comprehensive procurement procedures, there was a paucity of information on the skillset of decision makers; for example, are decision makers adequately trained to consider the impacts of specific kinds of AI and automated systems technology? Further, it was not clear whether those making procurement decisions had the subject matter capacity to evaluate risk and impacts on low-income and marginalized groups. This signals the need for more detailed guidance on procurement of AI/automated systems to avoid harm to low-income groups. At the same time, most safeguards described seemed to revolve around standard best-practices for legal compliance with general data protection regulations, which do not address impacts on low-income and marginalized groups.

With respect to useful resources, Respondents mentioned the ABA Legal Technology Buyer’s Guide, which contains the tech standards of the ABA’s Standing Committee on Legal Aid and Indigent Defense (SCLAID), and the Washington Courts Tech Principles (the Washington Supreme Court’s order on technology acquisition). Respondents also mentioned the National Institute of Standards and Technology (NIST) Framework for Trustworthy AI as a good start but commented that “it is not widely utilized or understood.” For lawyers serving low- or lower-income clients, client accessibility and feedback were obvious considerations that forced them to think about the impact of automated systems on low-income or marginalized groups when making procurement decisions.

With respect to barriers facing people attempting to procure automated systems, Respondents indicated cost was a barrier to procuring secure automated systems technologies and that products providing reliable cybersecurity safeguards tended to be more expensive than less reliable products. Respondents also mentioned a lack of due diligence on the part of lawyers as a barrier to procuring reliable automated systems—commenting that rather than taking vendors at face value, lawyers should make a practice of seeking out independent opinions about the risks and liabilities of adopting certain systems.

Finally, Respondents noted the importance of procurement processes, as AI procurement is a rapidly developing area that should be of increasing concern to the legal community. One Respondent urged that “[a]dvocating to the buying community to fight for trustworthiness and non-bias rather than accepting what is offered without diligent review is vital.” Another Respondent emphasized that critical decisions and decision-making roles should not be handed to technical experts lacking legal training. Even when technology is deployed under contracts with third parties, those using the products (whether lawyers or clients) must work to become experts in order to mitigate risk.

Echoing these thoughts and highlighting a recurring theme across the legal community in all the sectors described above, the failure of AI translation tools, another Respondent stated: “tech vendors say that AI can do translation--this is not true. Not in an effective accurate way. Legal groups buy that--and limited English proficient communities are provided ineffective/not helpful/confusing materials. They don’t test with professionals, they are all monolingual, so they don’t even know how language works outside of English, and the vendors push it--while it is not true. They waste grant/funding on those tools--thinking they are doing this great multilingual work--and what they are putting out is bad quality, bad for their brand.” The Respondent pointed out that the Department of Justice has said that machine translation is not legally sufficient to meet Title VI Limited English Proficiency standards of meaningful access. “Nevertheless, they still believe any vendor who asserts they have adequate multilingual solutions using machine translation. The level of complacency and naivety or refusal to acknowledge these problems is shocking.”

Conclusion

Ultimately, the message of this report is not just that economic justice is on the backburner when it comes to the development of AI, but more sharply, that a lack of focus on economic justice threatens the development, adoption, and proliferation of AI itself. If American businesses want to continue to dominate the AI sector, then they must go back to the basic democratic and inclusive principles that have long driven the American economy and consider the economic justice impacts of AI on low-income and other marginalized people.

The Literature Review highlighted that current ethical and policy considerations about AI do not provide adequate or adequately detailed consideration of economic justice impacts. While consideration is given to equity and fairness, these concepts are loosely defined. There are few if any discussions about how specific use cases impact low-income and other marginalized groups in particular. Without greater attention to these impacts, including by expanding basic access to the Internet for low-income and other marginalized groups and by ensuring that AI is equally accessible to differently abled groups, trust in AI will be hard won. In the face of growing income and wealth inequalities, and without more thoughtful attention to the economic margins, AI (a tool with the potential to realize the full rewards of diversity, enable meaningful inclusion, and strengthen democracy and the Rule of Law) risks becoming a weapon for undermining democratic principles.

According to the Standards Administration of China’s White Paper on AI Standardization, “Since AI is a future-shaping strategic technology, the world's developed nations are all striving for dominance in a new round of international competition, and issuing plans and policies centered around AI.” The one-hundred-page report reveals China’s interest in creating AI that builds more complex intelligent systems, powers traditional industries, frees humans from monotonous labor, increases work efficiency, and reduces error rates. While these are all laudable and important goals, they are unlikely to spur either innovation or trust in AI. The European approach to AI focuses on a top-down strategy of reviving flagging economies and supporting specific industries, paired with a risk-based approach to protecting established rights and integrating into the global economy. Notably, most foreign policy documents do not even have economic inclusion in mind, unlike their American equivalents, which, at the very least, acknowledge the potential benefits and risks of AI for low-income and other marginalized groups.

The US has long been concerned with ensuring that AI and technology benefit everyday people. This aspiration has powered American AI innovation as a steady and strong commitment to using AI to ensure a better life for all. As the tech industry rapidly globalizes amid an aggressive global race for AI dominance, it can be easy to forget this core drive at the foundation not just of American AI and technological innovation but of American innovation and enterprise writ large. An examination of the normative core developing around AI suggests that a commitment to economic inclusion has been largely decentered at home and abroad. There has never been a more crucial moment to reframe AI’s normative core so that genuine and thoughtful commitments to economic justice become a driving force for AI innovation and development in the United States moving forward.

Appendix I: AI Ethics & Legal Practice

As discussed in the Report’s Introduction, from a legal perspective, there are two components to AI Ethics. The first is related to the ethical qualities of an AI system, product, or tool, as it is designed or used. The second component, required for legal practice, relates to ethical use of AI systems, products, or tools when serving and advising clients.

As a general matter, basic rules of professional conduct still provide clear guidance on best practices for the use of AI tools themselves. For example, professional rules of conduct for lawyers uniformly impose basic duties of competence. These duties extend to emerging technology and require lawyers to keep abreast of new technologies. As a further example, Rule 3.3 of the ABA’s Model Rules of Professional Conduct and similar local rules urge lawyers to verify that any information generated using AI is truthful and accurate. The same rules also urge lawyers to promptly correct any information that was wrongly generated using AI as soon as the error comes to their attention. Similarly, professional conduct rules that address supervision also call for appropriate supervision of the use of AI tools.

While state bar ethics hotlines can provide a wealth of information on current ethical rules that might apply to prospective legal conduct related to lawyers’ use of AI, ethical guidance is still evolving. Below is a list of ethics opinions on the use of AI available at the time of publication. These opinions address a variety of duties that need careful attention when using AI (for example, duties of confidentiality might prohibit entering client information into AI tools without informed consent, and rules on the creation of lawyer-client relationships might require disclaimers when using chatbots to communicate with or advise clients).

ABA Materials:

State Bar Opinions:

California

District of Columbia

Florida

Kentucky

Michigan

New Jersey

Court Decisions:

Appendix II: Annotated Bibliography

The Database of AI Litigation, George Washington University, https://blogs.gwu.edu/law-eti/ai-litigation-database/ (containing information about ongoing and completed litigation involving artificial intelligence, including machine learning).

Ivey Dyson, How AI Threatens Civil Rights and Economic Opportunities, November 16, 2023, Brennan Center for Justice.

  • Summary: Well before the current interest in AI, “government agencies and companies are already employing AI systems that churn out results riddled with inaccuracies and biases that threaten civil rights, civil liberties, and economic opportunities.” “Inaccuracies or flawed designs within AI systems can also create barriers to accessing essential public benefits. In Michigan, one algorithm deployed by the state’s unemployment insurance agency wrongly flagged around 40,000 people as committing unemployment fraud, resulting in fines, denied benefits, and bankruptcy.”
  • “A letter sent this month to Congress by the Brennan Center and more than 85 other public interest organizations suggests a place to start: draw on the expertise of civil society and the communities most impacted by these technologies to come up with regulation that addresses the harms AI is already causing while also preparing for its future effects.”

Rebecca Kelly Slaughter, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, ISP Digital Future Whitepaper & YJoLT Special Publication, Yale Journal of Law and Technology (August 2021).

Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan & Cass R. Sunstein, Algorithms as Discrimination Detectors, 117 PROC. OF THE NAT’L ACAD. SCI. (Dec. 1, 2020), https://www.pnas.org/content/pnas/117/48/30096.full.pdf.

Matt Kasman & Jon Valant, The Opportunities and Risks of K-12 Student Placement Algorithms, BROOKINGS INST. (Feb. 28, 2019), https://www.brookings.edu/research/the-opportunities-and-risks-of-k-12-student-placement-algorithms/.

Cade Metz, London A.I. Lab Claims Breakthrough That Could Accelerate Drug Discovery, N.Y. TIMES (Nov. 30, 2020), https://www.nytimes.com/2020/11/30/technology/deepmind-ai-protein-folding.html.

Irene Dankwa-Mullan, et al., Transforming Diabetes Care Through Artificial Intelligence: The Future Is Here, 22 POPULATION HEALTH MGMT. 229, 240 (2019).

Alvaro Bedoya, The Color of Surveillance, SLATE (Jan. 18, 2016), https://slate.com/technology/2016/01/what-the-fbis-surveillance-of-martin-luther-king-says-about-modern-spying.html.

Amy Cyphert, Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students, 20 NEV. L. J. 457 (May 2020).

Clare Garvie, Alvaro Bedoya & Jonathan Frankle, The Perpetual Line-Up: Unregulated Police Face Recognition in America, GEO. L. CTR. PRIVACY & TECH. (Oct. 18, 2016), https://www.perpetuallineup.org.

Kashmir Hill, Another Arrest and Jail Time, Due to a Bad Facial Recognition Match, N.Y. TIMES (Jan. 6, 2021), https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html.

Algorithms in the Criminal Justice System: Pre-Trial Risk Assessment Tools, ELEC. PRIVACY INFO. CTR., https://epic.org/algorithmic-transparency/crim-justice (last visited Jan. 17, 2020).

Jason Tashea, Courts Are Using AI to Sentence Criminals. That Must Stop Now, WIRED (Apr. 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.

Adam S. Forman, Nathaniel M. Glasser & Christopher Lech, INSIGHT: Covid-19 May Push More Companies to Use AI as Hiring Tool, BLOOMBERG L. (May 1, 2020, 4:00 AM), https://news.bloomberglaw.com/daily-labor-report/insight-covid-19-may-push-more-companies-to-use-ai-as-hiring-tool.

Miriam Vogel, COVID-19 Could Bring Bias in AI to Pandemic Level Crisis, THRIVE GLOBAL (June 14, 2020), https://thriveglobal.com/stories/covid-19-could-bring-bias-in-ai-to-pandemic-level-crisis/.

Natasha Singer, Where Do Vaccine Doses Go, and Who Gets Them? The Algorithms Decide, N.Y. TIMES (Feb. 7, 2021), https://www.nytimes.com/2021/02/07/technology/vaccine-algorithms.html.

Eileen Guo & Karen Hao, This is the Stanford Vaccine Algorithm that Left Out Frontline Doctors, MIT TECH. REV. (Dec. 21, 2020), https://www.technologyreview.com/2020/12/21/1015303/stanford-vaccine-algorithm/.

Shoshana Zuboff, The Age of Surveillance Capitalism: the Fight for a Human Future at the New Frontier of Power (2019)

Julie E. Cohen, Between Truth and Power: the Legal Constructions of Informational Capitalism (2019)

European Parliament, Briefing: Economic Impacts of Artificial Intelligence (AI) (2019), https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf.

  • “AI has significant potential to boost economic growth and productivity, but at the same time it creates equally serious risks of job market polarization, rising inequality, structural unemployment and emergence of new undesirable industrial structures.”

Colleen V. Chien & Miriam Kim, Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap (Mar. 14, 2024), UC Berkeley Public Law Research Paper, Loyola of Los Angeles Law Review (forthcoming), https://ssrn.com/abstract=4733061.

  • “AI tools can significantly enhance legal professionals and narrow the justice gap, but that how they are introduced matter - though women comprise the majority of public interest lawyers, organic uptake of generative AI was much higher among men in our study. Assistance can also improve tool adoption. The participants’ positive experiences support viewing AI technologies as augmenting rather than threatening the work of lawyers. As we document, legal-aid lawyer directed technological solutions may have the greatest potential to not just marginally, but dramatically, increase service coverage, and we suggest some steps, such as exploring regulatory sandboxes and devising ways to institute voluntary certification or “seal of approval” programs verifying the quality of legal aid bots to support such generative collaborations. Along with the paper, we release a companion database of 100 helpful use cases, including prompts and outputs, provided by legal aid professionals in the trial, to support broader adoption of AI tools.”

Goldman Sachs, Global Economics Analyst: The Potentially Large Effects of Artificial Intelligence on Economic Growth (Briggs/Kodnani).

  • “If generative AI delivers on its promised capabilities, the labor market could face significant disruption. Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.”

AI Commission Report, U.S. Chamber of Commerce Technology Engagement Center, (2023), https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Report_v6.pdf.

  • “This debate must answer several core questions: What is the government’s role in promoting the kinds of innovation that allow for learning and adaptation while leveraging core strengths of the American economy in innovation and product development? How might policymakers balance competing interests associated with AI—those of economic, societal, and quality-of-life improvements—against privacy concerns, workforce disruption, and built-in-biases associated with algorithmic decision-making? And how can Washington establish a policy and regulatory environment that will help ensure continued U.S. global AI leadership while navigating its own course between increasing regulations from Europe and competition from China’s broad-based adoption of AI?”
  • “Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment…. A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.” (p. 10)
  • The Report names five pillars for AI Regulation: Efficiency, Collegiality, Neutrality, Flexibility, and Proportionality. (p. 11)
  • “Use an Evidence-Based Approach: Policymakers must take action to understand the potential impact of AI on the American workforce by leveraging new data sources and advanced analytics to understand the evolving impact of AI and machine learning on the American public.” (p. 12)

European Commission, A European Strategy for Artificial Intelligence (April 23, 2021), https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentation-CEPS-Webinar-L.-Sioli-23.4.21.pdf.

Government of Canada, Directive on Automated Decision-Making (April 1, 2021), https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.

Government:

Intergovernmental Organizations:

Civil Society:

Private Sector:

Multi-Stakeholders:

Other Resources:

  • Hilke Schellmann, The Algorithm: How AI decided who gets hired, monitored, promoted & fired and why we need to fight back (2024)
  • Brian Christian, The Alignment Problem: Machine Learning and Human Values (2020)
  • Chris Wiggins, Matthew L. Jones, How Data Happened: A History from the Age of Reason to the Age of Algorithms (2023)
  • Jamie Metzl, Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform our Lives, Work, and World (2024)
  • Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2022)
  • Michael Wooldridge, A Brief History of Artificial Intelligence
  • John Brockman (ed.), Possible Minds: Twenty-Five Ways of Looking at AI (2019)
  • Margaret Boden, Artificial Intelligence and Natural Man (2016)
  • Michael R. Genesereth and Nils J. Nilsson, Logical Foundations of Artificial Intelligence
  • Booz Allen Hamilton, The Artificial Intelligence Primer.
