Part II: The AI & Economic Justice Report: Methodology
Since October 2021, the CRSJ’s AI and Economic Justice Working Group has been using two methods to learn more about the impact of AI on economic justice. First, the Working Group produced an internally facing literature review of relevant laws, policies, principles, norms, guidelines, and legal problems (“Literature Review”). The general findings from this literature review are shared here with key readings and materials listed in the attached annotated bibliography. The Literature Review studied the extent to which current laws, norms, and principles governing the development and application of AI address economic justice as well as the major economic justice concerns triggered by AI. Through the Literature Review, the Working Group also engaged in critical evaluation of existing literature on the topic of AI and economic justice, identifying gaps in knowledge and law, and identifying topics and questions that required further investigation.
The Literature Review was conducted with the aim of achieving representative coverage through a search for relevant articles, reports, and laws from recognized institutions and organizations. Materials were screened for the authority of their source as well as for the extent to which they focused on economic inequality, economic justice, or issues impacting low-income and other marginalized groups. Second, the Working Group created a survey to compare the results of the Literature Review against the experiences, understandings, and observations of lawyers in practice (“Survey”).
Defining AI for the purposes of this survey presented an early challenge. There is no single definition of AI. As noted by the National Science and Technology Council, “[e]xperts offer differing taxonomies of AI problems and solutions.” The U.S. Chamber of Commerce Technology Engagement Center warns against overly prescriptive or overly broad definitions of AI. The Center argues that, given the diversity of current applications and an unknown innovation trajectory, a legal definition should be technology-neutral and flexible enough to accommodate technical progress, yet precise enough to provide the necessary legal certainty, and should focus on systems that learn and adapt over time. Further, the Center recommends that the definition be accessible to individuals at different levels of understanding and address AI’s potential impacts on the public, such as AI discrimination or violations of consumer rights. Along these lines, since it could not be presumed that all legal practitioners (the targeted Respondents of the Survey) share the same or similar understandings of AI, it was clear that AI needed to be defined in a way that would allow it to be easily identified.
Thus, in line with this long-standing approach prevalent in American literature on AI, one that seeks to ensure both legal certainty and accessibility, the Survey described AI as "automated systems," defined as a wide variety of technological systems or processes used to automate tasks, aid human decision-making, or replace human decisions altogether. Before taking the Survey, Respondents were informed that the term encompassed systems such as pre-trial risk assessments and other risk scoring systems, automated hiring or tenant screening systems, automated public benefits eligibility screening systems, automated fraud detection software, facial recognition systems, student monitoring systems, automated screening tools used for loan applications, ChatGPT and similar generative AI tools, and a variety of other systems that use some form of automation to aid or replace human decision-making.
Using this definition, and based on questions that emerged from the Literature Review, the Working Group developed and implemented a mixed-methods survey that collected experiences and perspectives from a broad cross-section of Respondents, primarily lawyers serving mostly low-income and other marginalized clients across a variety of practice settings, including solo, medium, and large law firms, non-profits, the judiciary, and state and local government. Respondents engaged in a diverse array of practice areas: consumer law, criminal justice, employment law, housing law, immigration law, education law, public benefits law, and others. Information about the Survey was shared widely across the legal profession, with the Survey targeted at legal practitioners. Respondents were asked to describe their areas of professional practice and demographic details about the populations that they serve. Based on this pre-screening, Respondents were directed to the parts of the Survey relevant to their current legal work (for example, consumer law practitioners answered the consumer law questions), as sketched below. All Respondents completed a general section, which included quantitative and qualitative questions. Throughout the Survey, AI was not treated as a monolith. Respondents were asked about their experience and understanding of specific types of AI tools as well. They were asked separately about their knowledge of the experience and understanding of their low-income clients with specific types of AI tools. Thus, the Survey collected a variety of experiences and understandings from across a variety of practice areas and settings, about a number of AI tools as they are deployed in the context of specific practice areas.
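For illustration only, the sketch below shows in simplified Python how pre-screening answers might route a Respondent to the practice-area modules described above. The module names and routing rules are hypothetical assumptions made for this sketch; they do not reproduce the Working Group's actual survey instrument.

```python
# Hypothetical illustration of the Survey's pre-screening routing.
# Module names and routing rules are assumptions for illustration only;
# they are not the Working Group's actual survey instrument.

PRACTICE_AREA_MODULES = {
    "consumer law": "consumer_module",
    "criminal justice": "criminal_justice_module",
    "employment law": "employment_module",
    "housing law": "housing_module",
    "immigration law": "immigration_module",
    "education law": "education_module",
    "public benefits law": "public_benefits_module",
}

def route_respondent(practice_areas: list[str]) -> list[str]:
    """Return the survey sections a Respondent would see.

    Every Respondent completes the general section (quantitative and
    qualitative questions); practice-area modules are added based on
    the pre-screening answers.
    """
    sections = ["general_module"]
    for area in practice_areas:
        module = PRACTICE_AREA_MODULES.get(area.lower())
        if module:
            sections.append(module)
    return sections

# Example: a housing and public benefits practitioner answers the
# general section plus the two relevant practice-area modules.
print(route_respondent(["Housing law", "Public benefits law"]))
```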
Finally, the 2023-2024 Survey focused, in the main, on traditional AI. These remain the most widespread forms of AI currently in use, notwithstanding the very recent explosion of generative AI tools such as ChatGPT. While some tools covered by the Survey are strictly within the realm of traditional AI (for example, binary decision-making algorithms), other tools that the Survey covers, chatbots for example, implement basic generative AI technologies; the sketch below illustrates the distinction. More detailed review of generative AI tools will become possible as their use continues to develop. In short, the AI space is evolving at a pace that makes any point-in-time survey incomplete.
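To make the distinction concrete, here is a minimal, purely hypothetical sketch contrasting a traditional binary decision-making algorithm with a generative AI interaction. The eligibility rule and the stand-in chatbot response are invented for illustration and do not represent any system covered by the Survey.

```python
# Simplified, hypothetical contrast between "traditional" AI and generative AI.
# Neither example reflects any real vendor's system; both are assumptions
# offered only to illustrate the distinction drawn in the text.

def traditional_screening(income: float, household_size: int) -> bool:
    """A binary decision-making algorithm: a fixed rule maps inputs
    to an approve/deny outcome, with no learned or generated text."""
    # Hypothetical threshold; real benefits rules are far more complex.
    income_limit = 15_000 + 5_000 * household_size
    return income <= income_limit

def generative_chatbot(question: str) -> str:
    """A generative AI tool: open-ended text in, model-generated text out.
    A real implementation would call a hosted large language model; the
    canned response here is a stand-in so the sketch runs offline."""
    return f"(model-generated guidance responding to: {question!r})"

if __name__ == "__main__":
    print(traditional_screening(income=28_000, household_size=3))      # True/False decision
    print(generative_chatbot("How do I apply for rental assistance?"))  # free-form text
```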
Part III: Observations
Economic Justice is Absent from AI’s Normative Core:
The Working Group’s essential learning is that economic justice considerations are not being given consistent or substantive consideration in the development and use of AI. One study in particular best exemplifies this finding: “Principled Artificial Intelligence,” a study by the Berkman Klein Center for Internet and Society at Harvard University. After analyzing technology policies authored over the past several years, the Berkman Klein Center identified a “normative core,” a set of core principles broadly recognized as shaping the development of AI. Notably, analysis of this normative core reveals that it does not include substantive discussion of or concern for economic justice.
Inadequate Analysis of the Impact of AI on Economic Justice:
The impact of AI varies considerably based on user, usage, and context. Rather than treat AI as a monolith, the Survey asked Respondents about their attitudes toward specific AI tools and functions (e.g., fraud detection tools, chatbots, online application tools). Respondents reported that different AI tools had different impacts, with qualitative and anecdotal responses detailing different impacts across different use cases and contexts. Overall, Respondents rated applicant screening systems, automated screening systems, and risk scoring systems as detrimental to low-income clients. Respondents had mixed attitudes toward fraud detection systems and generative AI tools, which many Respondents found beneficial or neutral in terms of impact on low-income or other marginalized clients. The observations culled from the Survey were consistent with the literature: AI is not a monolith, and different AI tools are having different impacts based on their use as well as their users.
Lack of Safeguards to Assure Safe Design and Quality:
The Literature Review and the Survey indicate that AI tools designed for low-income and other marginalized groups suffer from risky design and are more vulnerable to technological failure. Many Respondents reported that the effectiveness and positive value of AI depended on its design, use case, and implementation, with one Respondent clarifying that “[t]he impact will depend on how well the tool is designed and how fully it is tested before going live” and commenting that high-quality implementation may not reach systemically excluded groups for a few years, or maybe ever. There were also concerns about a lack of adequate testing and piloting before tools are sent to market, especially tools aimed at low-income or marginalized groups. Survey Respondents voiced consistent concerns about the negative impacts of technological failures and poorly designed AI tools, with a significant number of Respondents describing AI tools as “unhelpful, unresponsive, and inflexible.” Furthermore, a significant number of Respondents reported difficulties resolving technical problems and troubleshooting technology failures. In this context, one Respondent stated that AI systems have proven to be “serious barriers to meaningful access for a large percentage of their clients” because of such technological failures. Relatedly, Respondents raised concerns about disparities in access to better-quality AI tools.
Failures in Digital Access:
The Literature Review and the Survey point to concerns about digital exclusion resulting from inadequate access to digital infrastructure. Respondents consistently reported additional burdens for clients who do not have access to high-speed internet or a computer. For example, clients who could only access the Internet from their mobile phones could not access services or websites designed to be accessed on computers. Describing the access gap between urban and rural areas, one Respondent reported that some digital technologies are only now reaching rural areas. Another Respondent noted that “[a]mong those living in poverty, the elderly, monolingual, low-income non-English speakers and people with disabilities, access to technology is non-existent.” Language was also raised as an issue that compromised digital access, given that most online applications are only available in a few languages. In the immigration context, a Respondent pointed to language issues, noting that “[i]f [AI and automated systems are] difficult for me to understand, you can imagine how difficult it would be for immigrants/refugees to understand.” Access to specialized AI tools was another concern raised in the literature. An empirical study conducted by the University of California, Berkeley, noted concerns about resource disparities: specialized tools are developed to overcome the shortcomings of general-purpose products, yet those specialized products are not necessarily accessible to resource-limited lawyers and consumers. The study found that “technology often fails to be a leveler and that low-income and marginalized individuals often do not reap the full benefits of product innovation — because it is not priced or otherwise within reach, deprioritizes their unique needs, or tends to cater to the better off.” In particular, AI has caused significant access issues for disabled people, including access to websites, application tools, and other AI and AI-powered tools and software.
Failures in Digital Literacy:
The Literature Review and the Survey indicated the importance of digital literacy for mitigating economic injustice in the context of AI. Survey Respondents expressed concerns about access and exclusion stemming from clients’ struggles with digital literacy. Respondents noted that, in general, education levels played a more significant role than income levels, with lower levels of education making access to and use of the Internet or digital tools more difficult or risky for low-income clients, potentially signaling that limited education, rather than lower income, may be the deeper cause of digital exclusion. Respondents also noted that a lack of digital literacy made clients more vulnerable to risk online; they reported having served clients who were not equipped to evaluate the quality of information found online and who were, therefore, more vulnerable to being tricked or defrauded.
Failures in Transparency and Explainability:
The Literature Review and the Survey point to concerns about burdens caused by a general lack of transparency and explainability with respect to when AI is being used, how it is used, and how it works. This problem was seen as impacting not only low-income and otherwise marginalized clients but also the lawyers who serve them. One Respondent characterized this lack of transparency as “abysmal,” stating in essence that it is hard to get any information on how AI tools work and when they are used, even for lawyers and their investigators actively searching for this information.
Lack of Training for the Bar and Bench:
The Literature Review and the Survey point to a need for more education on AI across the bar and the bench. Respondents raised concerns about the legal community’s lack of familiarity with AI. Most Respondents reported that they would feel uncomfortable explaining how automated systems worked. Respondents likewise showed uncertainty about when AI systems are used and about the underlying technology. Relatedly, Respondents also expressed concerns that a lack of adequate training resulted in lawyers misusing AI technologies. One Respondent reported a case in the immigration context where a lawyer used an AI tool to translate a document but did not review the AI translation, causing a client to lose the case. Based on this case, the Respondent observed that AI is not always the problem: “it's common sense that translating platforms are a start not an end. You always check.” When read against the available literature, the Working Group understands these comments as reflecting not only the need for lawyers to understand the limitations of the technologies they use but also the need for better training on those limitations.
Public Sector Leadership:
Respondents to the survey, in particular, expressed dissatisfaction with the lack of a critical approach to AI at the government level. Respondents expressed confusion about the government’s role and leadership on AI. One Respondent commented, “I don't know if the government agencies are being critical of AI, or asking the questions about inclusion, bias in the data, etc., or privacy. Most of the people in the [conversations] are not from BIPOC or low income or those who have served time. [It is a v]ery exclusionary conversation.” Some Respondents were concerned that government systems were being designed to take advantage of those with less knowledge. A lack of government training on AI was also a concern raised by Respondents working in the public sector: “no one understands how these "black box" tools work.”
See also:
- Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy & Madhulika Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, Berkman Klein Center Research Publication No. 2020-1 (Jan. 15, 2020), available at SSRN: https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482.
- Colleen V. Chien & Miriam Kim, Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap, UC Berkeley Public Law Research Paper, Loyola of Los Angeles Law Review (forthcoming) (Mar. 14, 2024), available at SSRN.
- Legal Services Corporation, The Justice Gap: The Unmet Civil Legal Needs of Low-Income Americans (Apr. 2022).
- Hassan Kanu, “Artificial intelligence poised to hinder, not help, access to justice,” Reuters (Apr. 25, 2023).
- Jena McGill, Amy Salyzyn, Suzanne Bouclin & Karin Galldin, Emerging Technological Solutions to Access to Justice Problems: Opportunities and Risks of Mobile and Web-based Apps, University of Ottawa (Oct. 13, 2016).
- Washington State Courts, Access to Justice Technology Principles.
- Amy J. Schmitz, Measuring “Access to Justice” in the Rush to Digitize, 88 Fordham L. Rev. 2381 (2020).
- Rowena Rodrigues, Legal and Human Rights Issues of AI: Gaps, Challenges, and Vulnerabilities.
- Elizabeth Anderson, What is People-Centered Justice, World Justice Project (May 18, 2023), worldjusticeproject.org/news/what-people-centered-justice.
- Lisa Toohey, Monique Moore, Katelane Dart & Dan Toohey, Meeting the Access to Civil Justice Challenge: Digital Inclusion, Algorithmic Justice, and Human Centered Design.
- World Economic Forum, Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms (Aug. 2023).
Part IV: Literature Review Findings
The potential consequences of AI for humanity have been discussed as theory for two centuries. As discussed in Part I, as self-driving cars, biased algorithms, and faulty databases started causing measurable harm in the real world, normative discussions about the impact of AI took on increasingly urgent significance. Building on decades of scholarship, numerous social justice campaigns transformed how we think about the norms governing AI and forced the tech industry to acknowledge AI’s negative impacts on social and political inequalities. In recent years, these successful campaigns led to broad recognition of “social justice” as a valid principle underlying the design and implementation of AI.
The same cannot be said with respect to AI and principles of economic justice. Economic justice is a broad term with several different implications. For purposes of this report, economic justice requires that the development and application of AI should not reduce access to justice (and related goods and resources) for low-income and other marginalized groups. For example, AI has made it easier for companies to target online advertisements for predatory loans and other questionable financial instruments at low-income and other marginalized groups. This is a negative impact of AI on economic justice. AI allowing free access to credit scores for all would be an example of a positive impact of AI on economic justice.
So far, the subject of economic justice has not been tackled in any depth or with any seriousness in normative discussions about AI. Rather, discussions about the impact of AI on economic justice often boil down to discussions about AI’s impact on the workforce and the fear that AI will replace workers across multiple industries. Similarly, while the establishment of rights to privacy, consent, and control over data, and of the ability to restrict processing, all have positive impacts for low-income and other marginalized groups, privacy in and of itself does not address economic injustice in any tailored or deliberate manner. Many jurisdictions now prohibit “dark” or “deceptive” patterns, visual features of a website or other interface that “trick users into doing things, such as buying overpriced insurance.” While each of these topics stands to benefit low-income and other marginalized groups, none of these ongoing discussions impacts the development of AI itself, especially in terms of what tools get developed, for whom, and how.
The broader literature on the norms governing the development of AI shows a lack of normative concern for economic justice in the field of AI. “Principled Artificial Intelligence,” a study by the Berkman Klein Center for Internet and Society at Harvard University, described major AI policy documents as converging around eight major themes that make up a normative core for a principles-based approach to AI governance. The eight themes are (1) privacy, (2) accountability, (3) safety and security, (4) transparency and explainability, (5) fairness and non-discrimination, (6) human control of technology, (7) professional responsibility, and (8) promotion of human values. The study found that these documents “suggest the earliest emergence of sectoral norms.” While each of these themes could theoretically bolster economic justice, none of them directly addresses the impact of AI on economic justice. Based on this study and the overall literature on the impact of AI, AI’s current normative core does not include substantive consideration of economic justice.
Looking at several of these themes in greater detail, according to the Berkman Klein study, “fairness and nondiscrimination principles” have been universally accepted and are understood as calling for AI systems to be designed and used to maximize fairness and promote inclusivity. “Promotion of human values,” adopted by 69% of documents in the dataset, requires that “the ends to which AI is devoted, and the means by which it is implemented, should correspond with our core values and generally promote humanity’s well-being.” Finally, according to the study, approximately fifty percent of policy documents included the principle of “accountability,” which addresses concerns about who will be accountable for decisions that are no longer made by humans, and the role of impact assessments in assessing “technology’s impacts on the social and natural world” at “three essential stages: design (pre-deployment), monitoring (during deployment), and redress (after harm has occurred).” In addition, “accountability principles are frequently mentioned together with the principle of transparent and explainable AI, often highlighting the need for accountability as a means to gain the public’s trust in AI and dissipate fears.”
While these and other considerations developing around AI stand to enhance economic justice in certain contexts and for certain use cases, the reasonable question that the Berkman Klein study raises is whether concepts such as “fairness and nondiscrimination” or “promotion of human values” are enough to ensure that AI does not worsen pre-existing economic inequalities. The Working Group’s literature review concludes that the answer to this question is a clear “no.”
Across mainstream literature on AI policy, only three policy documents provide a direct vision of what economic justice might look like in the context of the development of AI. All three documents are from foreign sources. “Smart Dubai” is a strategic policy initiative launched by the Government of Dubai that aims to further Dubai’s transformation into a “smart city.” Smart Dubai’s policy statement accounts for the impact of disparities in digital access and literacy in economic terms. According to the policy statement, “AI should improve society, and society should be consulted in a representative fashion to inform the development of AI.” In stating that AI systems should be fair, the policy statement requires, for example, that AI systems that help detect communities in great need after a natural disaster should account for the fact that “communities where smartphone penetration is lower have less presence on social media, and so are at risk of receiving less attention.”
Similarly, the G20 AI Principles urge that “[s]takeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.”
Providing the most comprehensive vision for economic justice, the Government of Japan’s “Principles of Human-Centric AI” report touches upon ideas of inclusive and equal flourishing. The report describes AI as a tool for “a society where people can better demonstrate their various human abilities, show greater creativity, engage in challenging work, and live richer lives both physically and mentally.” The principles envision AI as bringing us closer to “an ideal in the modern world and a major challenge to create a society in which people with diverse backgrounds, values and ways of thinking can pursue their own well-being while society creates new value by flexibly embracing them.” The report also describes a “Society 5.0” where people should fully understand the advantages and disadvantages of AI, including understanding bias as a general concept and three specific kinds of bias: statistical bias, bias caused by social conditions, and bias arising from malicious intent. Society 5.0 tackles “resolving negative outcomes (such as inequality, widening disparity, social exclusion, and so on) that may result from the evolution of AI.”
Japan’s report goes into further detail, listing “Social Principles,” which require “all people to enjoy the benefits of AI and avoid creating a digital divide with so-called "information poor" or "technology poor" people left behind.” The Principle of Education/Literacy states that “[i]n a society premised on AI, we do not desire to create disparities or divisions between people or create those who are socially disadvantaged. Therefore, policy makers and managers of businesses involved in AI must have an accurate understanding of AI, knowledge and ethics permitting appropriate use of AI in society.” The Principle of Fair Competition states that “[t]he use of AI should not generate a situation where wealth and social influence are unfairly biased towards certain stakeholders.” The Principle of Fairness, Accountability, and Transparency requires that “people who use AI are not subject to undue discrimination with regard to personal background, or to unfair treatment in terms of human dignity.”
Japan’s “Principles of Human-Centric AI” provide the most explicit commitments to ensuring that AI does not worsen pre-existing economic injustice and inequality. Nonetheless, the Smart Dubai Principles, the G20 AI Principles, and Japan’s Social Principles of Human-Centric AI all signal the importance of economic justice for AI but do not contain specific or actionable guidance on how to ensure that AI enhances rather than undermines economic justice. While Japan’s Principles provide the most comprehensive and direct embrace of economic justice as a fundamental necessity for the healthy development of AI, they do not offer any clear framework or approach for translating what is essentially an equitable vision for society into a real plan that could lead to better conditions for low-income and other marginalized groups with respect to AI technology.
There is a need for a more detailed approach to considering the impact of AI on economic justice. As an illustration, an important study answering the practical question of how to create AI in a way that can enhance and support economic justice was conducted at the University of California, Berkeley (“the Berkeley Study”). This study focused on the use of AI to address the access to justice gap. It found that generative AI tools can “significantly enhance [the work of] legal professionals and narrow the justice gap, but how they are introduced matters.” The study highlighted, amongst other findings, that although women comprise most public interest lawyers, organic uptake of generative AI was much higher among men in the study, showing the need to address gender disparities in the use of AI. The study concluded that appropriate assistance and support resources can also improve the adoption of AI tools.
The Berkeley Study is an important point of departure. It is a context-sensitive and specific examination of obstacles to the adoption of AI tools by public interest lawyers. It offers observations that are complementary to the central observation of this report: that failures in AI equity are not inherent to the potential of the technology itself. Rather, the Berkeley Study touches upon the importance of the effective introduction of AI tools and the potential gender biases preventing or discouraging adoption by women. However, what the Berkeley Study lacks remains a considerable and urgent blind spot for the technology sector and for all those seeking to understand, harness, regulate, or guide its development: there is no independent picture of how the AI tools themselves are impacting low-income and other marginalized groups.
Outside of a few generalized surveys, there is currently no particularized understanding of how AI is impacting low-income and other marginalized groups from the perspective of these groups. Certainly, the introduction of AI into the legal profession stands to increase efficiency: no more drafting documents from scratch, no more poring through thousands of pages of discovery by hand, and, some even threaten, potentially no more lawyers. With heavy caseloads and little time to give clients individualized attention, AI tools hold great promise for lightening the load of public interest lawyers. But how does one know whether this increased efficiency will lead to better outcomes overall for the client populations being served? Or what the impact of certain automated functions (e.g., fraud detection, authentication, risk assessment) will be on economic justice? Or where biases against lower-income or other economically marginalized groups risk the greatest harm? The current literature cannot provide clear answers to these questions, even while advocating strongly for widespread adoption of AI tools.
More detailed studies are vital because they provide more than just a vision of the potential of AI for enhancing economic justice. More detailed studies will provide action items for government, business, and the professions alike. For example, take the isolated finding that organic uptake of AI tools by women, who make up the vast majority of public interest lawyers, is lagging. This means that AI tools that stand to enhance economic justice are not reaching economically marginalized populations because a culture has not developed around their use. This is a problem that cannot be solved by AI alone. It requires cultural, social, political, and other kinds of messaging and interventions. It is also not an intuitive or obvious finding. The Working Group’s Literature Review and Survey indicate that there are even more valuable and specific findings to be made, findings that will ensure not adoption of AI for adoption’s sake, but adoption for targeted and thought-through purposes aimed at creating measurable benefit to low-income and other economically marginalized groups.
Part V: Sector-based Respondent Observations
Consumer Law:
Amongst those survey Respondents working on consumer law issues, a majority worked on behalf of consumers, many on behalf of nonprofits, charitable organizations, or community organizations; a small number worked on behalf of government or for-profit companies.
Respondents reported harm resulting from data collection, processing, or misuse, including issues related to online tracking, targeted advertising, or data breaches. Only a small fraction of Respondents reported experiencing no such harms, with more Respondents reporting uncertainty as to whether they or their clients have experienced AI harm in the consumer context.
Cumulatively, Respondents, as a group, reported having experienced every type of harm listed in the survey, including difficulties resolving consumer disputes, difficulties obtaining accurate information, harm from online targeted advertisements, discrimination based on profiling of their online activities, being shown lower-quality or predatory products and services, harm by data brokers' use of their data, harm as a result of data breach, difficulty correcting or removing their personal data online, and general data privacy harms. Echoing digital exclusion and access to justice concerns, in the consumer law context, the most frequently reported harm was unequal access to information or resources online.
Further, Respondents reported erroneous wage garnishments based on algorithmic or automated determination; erroneous default judgment based on data broker information used in a dataset; and bank accounts or other financial accounts closed incorrectly based on an algorithmic or automated determination. One Respondent described experiencing harm centered predominantly in housing, benefit investigations, identity fraud, and financial domestic abuse.
While many Respondents stated that they themselves, their organization, or their client had encountered issues involving deceptive or manipulative practices (often called "dark patterns"), an equal number of Respondents were unsure about what this meant.
Respondents reported knowledge of advertisements being disguised to look like independent digital content. Respondents reported problems resulting from businesses using/buying key terms or junk fees to trick consumers into sharing their data. Respondents reported issues with data collectors not allowing the deletion of accounts or data, not giving consumers free, no-fee options, keeping them in a subscription they can't afford and not affording them offline options to get help, sharing data without consent, and luring consumers to opt-in to services without a clear means of opting out. Relatedly, Respondents reported instances of AI making it difficult for consumers to cancel subscriptions or charges. Respondents also described facing issues with automated credit underwriting, automated pricing models, home valuation models, and predatory lending algorithms.
In addition, in the consumer law context, Respondents reported consumers’ digital access and exclusion issues stemming from unstable electricity and Internet access. Respondents reported viewing the lack of access to high-speed internet as a significant barrier to accessing the value that AI has to offer consumers. They also reported that a lack of adequate skills to effectively use AI services was a significant barrier in the consumer context. One Respondent reported two specific problems in the consumer law context: first, many support services and support scripts do not factor in the needs of those with cognitive disabilities; second, almost everything is in English. Respondents reported that “automated systems assume a shared language/understanding of specific terms, such that if a person doesn't understand a question or their situation is complex, a “wrong” answer prevents them from going forward.”
Signaling access to justice concerns for consumers, several Respondents noted frustrations with being screened out of automated systems without any recourse to a human who could fix the problem, with some facing cost barriers when it comes to moving beyond AI assistance to get specialized help from humans. One Respondent pointed to challenges posed by issues of jurisdiction, stating that clients harmed by online companies with minimal contacts in the United States can have a difficult time seeking remedies for digital harms, as these companies are not under the jurisdiction of American courts, or the clients have contracted away their right to seek recourse in court or administrative proceedings.
Notably, no Respondents could identify any local, state, or federal government efforts in the consumer law context that are working to address potential negative impacts of AI on consumers. One Respondent cited reliance on private foundations rather than governments, stating that “legal funders are not being critical or questioning the lack of BIPOC/LGBTQ and LEP [Disabled] communities in the conversation--even when Congress funds them to fund those that serve low-income people equally. Even Legal Services Corporation funded nonprofits are not in the conversations, so no inclusion not even by proximity.”
Criminal Justice:
In the criminal justice context, while Respondents reported understanding how automated systems impacted their clients or their work, only half of the Respondents expressed feeling comfortable explaining how the automated systems they encountered worked. Almost all Respondents shared concerns about implicit bias in AI and automated systems in the criminal justice system. Respondents indicated that certain uses of AI posed greater risks than others. Respondents reported that digital literacy and opaque, proprietary, or simplistic code were issues for those targeted as suspects or defendants by the criminal justice system. At the same time, Respondents provided positive feedback about electronic notification systems. While Respondents were split on the use of AI for bail and sentencing decisions, more Respondents saw AI as improving criminal justice outcomes like bail and sentencing decisions than not. Most Respondents reported that information about automated systems used in the criminal justice system was not accessible to the public.
Amongst the barriers criminal suspects or defendants faced when dealing with automated systems, Respondents listed a lack of internet service or a phone; a lack of digital literacy and difficulties reading at all; difficulties or complexities in using the AI tools; and opaque, proprietary code.
Respondents also demonstrated differing attitudes to AI based on their role in the criminal justice system and their familiarity with AI tools. Concerns over the use of AI in the criminal justice system ranged from fears that AI would further dehumanize criminal suspects, defendants, and inmates, to worry that there was too much prejudice against AI tools.
Finally, in terms of strategies to handle any concerns or issues related to the systems, Respondents listed having backups for service by email (including old-fashioned certified mail), training and education on AI, and reviewing the limitations and vulnerabilities of the systems that lawyers or their clients interact with.
One Respondent pointed to the “Arnold Tool,” which looks at relative risk based on a number of factors for every person who gets arrested on a felony. The Respondent said that questions have been raised about the appropriate use of such tools and that, while judges have discretion to override reliance on the tool, it can be hard to strike a balance in practice, especially given concerns that risk assessment tools are based on data that is biased.
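As a purely hypothetical illustration of how weighted-factor pretrial risk scoring tools of this general kind operate, and of why biased underlying data matters, consider the sketch below. The factors, weights, and thresholds are invented for illustration and are not the methodology of the tool the Respondent named.

```python
# Hypothetical weighted-factor risk score, offered only to illustrate how
# pretrial risk assessment tools of this general kind work. The factors,
# weights, and thresholds below are invented; they are NOT the actual
# methodology of the tool the Respondent named.

FACTOR_WEIGHTS = {
    "age_under_23": 2,
    "pending_charge_at_arrest": 3,
    "prior_failure_to_appear": 3,
    "prior_violent_conviction": 4,
}

def risk_score(factors: dict[str, bool]) -> int:
    """Sum the weights of the factors present for an arrested person."""
    return sum(weight for name, weight in FACTOR_WEIGHTS.items() if factors.get(name))

def risk_level(score: int) -> str:
    """Map a numeric score to a coarse risk category a judge might see.
    Because the inputs reflect past policing and charging patterns, the
    output can reproduce biases in that underlying data."""
    if score >= 7:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

if __name__ == "__main__":
    example = {"age_under_23": True, "prior_failure_to_appear": True}
    print(risk_level(risk_score(example)))  # score 5 -> "moderate"
```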
Education Law:
Survey Respondents working on issues of education law largely represented or advocated on behalf of students or parents. They reported digital access/exclusion, legal training, and harms as the major AI-related challenges in the education law context.
With respect to digital access/exclusion, Respondents cited concerns regarding the lack of broadband availability for rural communities and students, flagged difficulties that AI systems present for students with physical or learning disabilities, and pointed to the need to provide students with wider access to technology and to educate them about the use of automated systems.
The need for adequate continuing education for lawyers seemed most acute with respect to education law, as most Respondents in this field reported being not at all familiar with any common AI tools being used in education. They also expressed contrasting views about the value of AI in the educational setting. A clearer picture of the impact of AI in education will likely not emerge until there is better education and awareness amongst lawyers working in this area.
Employment Law:
Survey Respondents working on issues of employment law worked in a variety of capacities, including plaintiff-side litigation, defendant-side litigation, the judicial system, academia, and for-profit companies.
Among these Respondents, there was a consensus that automated employment systems are, in their current form, exacerbating economic injustice. The major harms identified by Respondents related to design and access. Respondents reported that certain AI-driven employment tools created significant access issues, ranging from the inability to easily access online job applications to the inability to access digital work tools such as AI chats or other productivity tools. Respondents expressed concerns that even when accessible, AI-driven tools tend to be biased, unresponsive, inflexible, and difficult to use. Respondents reported concern that minorities and marginalized groups are more likely to be overlooked or excluded by hiring tools and equally more likely to be penalized by AI-driven surveillance systems, which can be especially problematic for disabled workers. Respondents stated a variety of concerns, including how the design of user interfaces creates a veneer of objectivity over subjective processes such as hiring or work evaluations.
More Respondents felt confident in their knowledge of obligations on employers and third parties with respect to assessing, evaluating, and using automated systems in the workplace and expressed comfort with advising clients or educating non-lawyers on the legal impacts of automated employment systems. At the same time, while Respondents felt that they understood how AI is used in the employment context, Respondents expressed less familiarity with state and federal laws that could help redress harms caused by AI.
With respect to tools that can be used to overcome the challenges presented by AI, the solutions Respondents relied on ranged from no tools or solutions at all to reliance on statutes (the National Labor Relations Act of 1935, Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act of 1990, the Fair Labor Standards Act of 1938 (29 U.S.C. § 203), Occupational Safety and Health regulations and policies, and antidiscrimination laws more broadly).
Housing Law:
Most Survey Respondents represented housing applicants or tenants, with some representing landlords, property management companies, vendors of automated systems, government contractors, and local, state, or federal government agencies. Other Respondents described providing online tools for tenants and for lawyers representing tenants, advising housing clinics, identifying housing issues for referral, representing tenant associations, or representing the best interests of children in the housing context.
The Survey indicates that traditional AI models have had a negative impact on low-income tenants. Most of these Respondents stated that they had encountered issues involving harms caused by tenant screening reports. While some stated that they were unsure whether they had encountered such issues, only a small minority stated that they had not encountered any. Respondents reported encountering various issues involving the use of AI and automated systems in the housing context, including problems with tenant screening reports: incomplete reports, reports containing information about eviction outcomes (which should be sealed), and reports containing factual errors.
Respondents reported problems that arose when a negative decision from a tenant screening system was made based on insufficient credit history alone. They also reported negative outcomes caused by typos, wrong names, false identities, identity theft issues, social security theft issues, pay documentation issues, and automated systems that recommended eviction because they failed to properly assess risk arising from the misconduct of children.
Further, Respondents observed that structural biases in how credit is rated cause difficulties for marginalized groups; for example, merely generating a credit report can reduce one’s credit score, making it harder to shop around for loans or other tools that provide important economic support to members of marginalized groups.
Concerns about digital exclusion and access to justice were also particularly sharp in this sector. One Respondent described due process issues, stating that notices by email are not helpful for a significant number of tenants who do not have regular access to the Internet. Another Respondent described the “black box” effect, where a lack of transparency in AI-driven decision-making processes leads clients to suspect, but not know, why they were not chosen for housing. Respondents also indicated issues with digital access related to most AI tools being available only in English, and to the English used being too “high register” or complex.
In terms of successful legal strategies, one Respondent highlighted the importance of advising clients to dispute background reports, particularly when eviction records have been sealed. Another stated that making reasonable accommodation requests to disregard history where disabilities were a contributing factor had also proved fruitful.
Practitioners advise that generative AI tools, which hit the mainstream in 2024, have the potential to improve the methods by which legal services organizations deliver assistance to tenants and homeowners. GenAI chatbots have proven to be effective at providing simple and direct guidance to people who seek information about getting repairs made, applying for rental assistance, and other questions about their rights. These tools are typically free or low-cost and are accessible to anyone who has a smart phone or similar mobile device.
Immigration Law:
Survey Respondents who answered questions on immigration law mostly represented immigrants or refugees, with a minority representing government agencies or working in academia. Most Respondents reported encountering three types of data-related harms in the immigration/refugee context. First, immigrants/refugees struggle to access or correct their own data. Second, data about immigrants/refugees is inaccurate or incomplete. Third, data is improperly collected and provided to law enforcement or immigration officials. Respondents described access to justice issues that resulted from such data failures. For example, one Respondent described instances where immigrants were denied relief because they could not prove a record did not exist, or were blamed or profiled because their name matched that of someone else who did something wrong.
Respondents reported significant barriers caused by technological failure, including issues with access and system downtime. Respondents described being unable to get past the United States Citizenship and Immigration Services (USCIS) automated system, leaving them unable to obtain necessary records or reschedule biometrics appointments. These failures led some Respondents to conclude that reliance on machines alone is, at this stage, problematic because it makes it difficult to reach humans who might help solve problems caused by technical failures. Respondents described incorrect translations into other languages. Further, Respondents described failures resulting from a lack of technical literacy to interact with systems effectively and from racial profiling and other systemic biases. Tying together data issues and issues with technological failure, Respondents noted that because many immigrants have similar names, there have been issues of mismatched identities (which was also noted as a problem in the housing sector).
As indicated above, language was also an issue in the immigration context, with Respondents raising concerns that low-income groups are unable to communicate in English and are unfamiliar with U.S. systems—making access to AI and automated tools even more challenging or impossible.
Adequate training on AI and automated systems for the legal community was also a concern in the immigration context. Amongst Respondents in this area, while there was some familiarity with facial recognition technologies, no Respondent indicated being extremely familiar with any AI technologies.
Notably, Respondents expressed a feeling that laws and legal tools have not proven helpful in addressing issues in the immigration context, especially when it comes to issues as simple as reaching a human person at USCIS. One Respondent reported that “we have tried calling at 6 or 7 am hoping we would be more likely to be able to get connected to a human person before it became too busy.” Another Respondent highlighted the importance of framing issues clearly when dealing with USCIS and recommended strong advocacy with USCIS to correct any issues, usually by applying for a new document with the information corrected. Such issues might not be directly related to data or technology failures. Rather, they might signal broader structural issues that require training government staff on how people are impacted by the use of technology and by technological failure.
To address AI-related failures in the immigration context, one Respondent recommended recourse to protections against discrimination in The Immigration Reform and Control Act (which does not cover undocumented people, only citizens, nationals, and authorized aliens).
With respect to efforts by local, state, or federal governments to address potential negative impacts of automated systems on immigrants or refugees, the only activity reported was community efforts in Boston to ban facial recognition software and other surveillance technology altogether.
Public Benefits:
Most survey Respondents answering questions on public benefits were supporting benefit recipients, with a small minority supporting government contractors, working with government, or consulting with government externally. Amongst this group, there was no observable consensus on the impacts of any specific AI systems. However, most Respondents disagreed with the statement that automated systems improve public benefits programs and most very strongly agreed that automated systems harm recipients of public benefits. There was also consensus that relevant agencies or offices could not easily change the automated systems they use.
With respect to barriers faced by public benefits recipients when dealing with automated systems, concerns were similar to those identified in the housing and immigration sectors. Lack of transparency was a common theme across responses with one Respondent highlighting that there is “[n]o room for explanation. Automated systems don’t allow for you to explain the information requested.” Language barriers were also mentioned, as were concerns around not providing beneficiaries with “sufficient notice (due process)” and “not enough human support.” One Respondent noted how AI failures compound, describing how design failures force clients to provide imperfect responses that result in discrepancies that trigger fraud detection systems.
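A minimal, hypothetical sketch of the compounding the Respondent describes: a form that cannot capture a client's real circumstances forces an approximate answer, and a simple consistency rule then flags the mismatch as potential fraud. The field names, tolerance, and rule are assumptions for illustration, not any agency's actual system.

```python
# Hypothetical illustration of compounding failure: a rigid form design
# forces an imperfect answer, and an automated consistency rule then
# flags the resulting discrepancy as potential fraud. The field names,
# tolerance, and rule are invented for illustration only.

def forced_monthly_income(actual_weekly_amounts: list[float]) -> float:
    """A form that only accepts a single fixed monthly income forces the
    client to approximate irregular gig earnings."""
    return round(sum(actual_weekly_amounts) / len(actual_weekly_amounts) * 4, 2)

def fraud_flag(reported_monthly: float, employer_reported_quarterly: float,
               tolerance: float = 0.10) -> bool:
    """Flag a case when self-reported and third-party figures diverge by
    more than the tolerance, even if the gap stems from the form design."""
    implied_monthly = employer_reported_quarterly / 3
    return abs(reported_monthly - implied_monthly) > tolerance * implied_monthly

if __name__ == "__main__":
    weekly = [0.0, 250.0, 400.0, 150.0]          # irregular gig income
    reported = forced_monthly_income(weekly)     # forced approximation: 800.0
    print(fraud_flag(reported, employer_reported_quarterly=3300.0))  # True: flagged
```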
Sharing general observations, one Respondent who served low-income public benefit recipients in two states (Michigan and North Carolina) listed some of the biggest issues in the public benefits arena: systems are unusable; systems cannot be updated; local departments of social services have little to no control over how their internal systems are managed, such that they cannot fix bugs or update claims easily, and because they do not understand how their systems work, they use workarounds that then harm claimants; and a lack of access to the code behind programs (particularly around initial eligibility screening and fraud flagging) is dangerous for claimants and prevents them from having meaningful access to these programs. Another Respondent highlighted issues with systemic bias, noting that “[t]he whole system assumes those [seeking] benefits are scoundrels and doing fraud, etc. The whole use of these tools is biased and punitive.” Another Respondent noted age-related issues, stating that elder persons not familiar with automated systems face additional barriers in accessing public benefits. Finally, another Respondent characterized the public benefits systems as “appalling,” stating that AI-driven tools “are impenetrable and confusing, responses are slow or inconsistent, it's difficult to get human assistance. They put terrible stress on already burdened clients.”
According to Respondents, laws, tools, legal remedies, and strategies to address these barriers included lawsuits; Title VI complaints; complaints to the Department of Labor; Office of Civil Rights complaints (if clients are unable to access the systems for lack of language options); manually reviewing determinations to ensure they are correct; and asking public benefits organizations to redesign their websites to better support applicants.
Procurement:
Overall, responses to questions on the procurement of AI (not including the use of AI for procurement) indicated some degree of confusion with respect to the difference between legal due diligence and additional consideration of the impacts of AI on low-income and other marginalized groups. For example, one Survey Respondent seemed to suggest that anti-bias policies were sufficient to account for the impact that automated systems have on low-income or marginalized groups. However, economic injustice is not necessarily the result of explicit bias against low-income or otherwise economically marginalized groups. Rather, it is usually the result of systemic barriers, difficulties, differences, or blind spots.
While some Respondents described comprehensive procurement procedures, there was a paucity of information on the skillset of decision makers; for example, are decision makers adequately trained to consider the impacts of specific kinds of AI and automated systems technology? Further, it was not clear whether those making procurement decisions had the subject matter capacity to evaluate risk and impacts on low-income and marginalized groups. This signals the need for more detailed guidance on procurement of AI/automated systems to avoid harm to low-income groups. At the same time, most safeguards described seemed to revolve around standard best-practices for legal compliance with general data protection regulations, which do not address impacts on low-income and marginalized groups.
With respect to useful resources, Respondents mentioned the ABA Legal Technology Buyer’s Guide, which contains the tech standards of the ABA’s Standing Committee on Legal Aid and Indigent Defense (SCLAID), and the Washington Courts Tech Principles (the Washington Supreme Court’s order for tech acquisition). Respondents also mentioned the National Institute of Standards and Technology (NIST) Framework for Trustworthy AI as a good start but commented that “it is not widely utilized or understood.” For lawyers serving low- or lower-income clients, client accessibility and feedback were obvious considerations that forced them to think about the impact of automated systems on low-income or marginalized groups when making procurement decisions.
With respect to barriers facing people attempting to procure automated systems, Respondents indicated cost was a barrier to procuring secure automated systems technologies and that products providing reliable cybersecurity safeguards tended to be more expensive than less reliable products. Respondents also mentioned a lack of due diligence on the part of lawyers as a barrier to procuring reliable automated systems—commenting that rather than taking vendors at face value, lawyers should make a practice of seeking out independent opinions about the risks and liabilities of adopting certain systems.
Finally, Respondents noted the importance of procurement processes, as procurement is a rapidly developing area that should be of increasing concern to the legal community. One Respondent urged that “[a]dvocating to the buying community to fight for trustworthiness and non-bias rather than accepting what is offered without diligent review is vital.” Another Respondent emphasized that critical decisions and decision-making roles should not be handed to technical experts lacking legal training. Even when technology is being deployed under contracts with third parties, those using the products (whether lawyers or clients) must work to become experts in order to mitigate risk.
Echoing these thoughts and highlighting a recurring theme across all the sectors described above, the failure of AI translation tools, another Respondent stated: “tech vendors say that AI can do translation--this is not true. Not in an effective accurate way. Legal groups buy that--and limited English proficient communities are provided ineffective/not helpful/confusing materials. They don’t test with professionals, they are all monolingual, so they don’t even know how language works outside of English, and the vendors push it--while it is not true. They waste grant/funding on those tools--thinking they are doing this great multilingual work--and what they are putting out is bad quality, bad for their brand.” The Respondent pointed out that the Department of Justice has said that machine translation is not legally sufficient to meet Title VI Limited English Proficiency standards of meaningful access. “Nevertheless, they still believe any vendor who asserts they have adequate multilingual solutions using machine translation. The level of complacency and naivety or refusal to acknowledge these problems is shocking.”
Conclusion
Ultimately, the message of this report is not just that economic justice is on the back burner when it comes to the development of AI, but, more sharply, that a lack of focus on economic justice threatens the development, adoption, and proliferation of AI itself. If American businesses want to continue to dominate the AI sector, then they must return to the basic democratic and inclusive principles that have long driven the American economy and consider the economic justice impacts of AI on low-income and other marginalized people.
The Literature Review highlighted that current ethical and policy frameworks for AI do not provide adequate, or adequately detailed, consideration of economic justice impacts. While consideration is given to equity and fairness, these concepts are loosely defined. There are few, if any, discussions of how specific use cases impact low-income and other marginalized groups in particular. Without greater attention to these impacts, including expanding basic access to the Internet for low-income and other marginalized groups and ensuring that AI is equally accessible to differently abled groups, trust in AI will be hard won. In the face of growing income and wealth inequalities, and without more thoughtful attention to the economic margins, AI (a tool with the potential to realize the full rewards of diversity, enable meaningful inclusion, and strengthen democracy and the Rule of Law) risks becoming a weapon for undermining democratic principles.
According to the Standards Administration of China’s White Paper of AI Standardization, “Since AI is a future-shaping strategic technology, the world's developed nations are all striving for dominance in a new round of international competition, and issuing plans and policies centered around AI.” The one-hundred-page report reveals China’s interest in AI that builds more complex intelligent systems, powers traditional industries, frees humans from monotonous labor, increases work efficiency, and reduces error rates. While these are all laudable and important goals, they are unlikely to spur either innovation or trust in AI. The European approach to AI seems to focus on a top-down strategy of reviving flagging economies and supporting specific industries, paired with a risk-based approach to protecting established rights and integrating into the global economy. Notably, most policy documents from abroad do not even have economic inclusion in mind, unlike their American equivalents, which, at the very least, acknowledge the potential benefits and risks of AI for low-income and other marginalized groups.
The US has long been concerned with ensuring that AI and technology benefit everyday people. This aspiration, a steady and strong commitment to using AI to ensure a better life for all, has powered American AI innovation. As the tech industry rapidly globalizes amid an aggressive global race for AI dominance, it can be easy to forget this core drive, which lies at the foundation not just of American AI and technological innovation but of American innovation and enterprise writ large. An examination of the normative core developing around AI suggests that a commitment to economic inclusion has been largely decentered, both at home and abroad. There has never been a more crucial moment to reframe AI’s normative core so that genuine and thoughtful commitments to economic justice become a driving force for AI innovation and development in the United States moving forward.
Appendix I: AI Ethics & Legal Practice
As discussed in the Report’s Introduction, from a legal perspective, there are two components to AI Ethics. The first relates to the ethical qualities of an AI system, product, or tool as it is designed or used. The second component, specific to legal practice, relates to the ethical use of AI systems, products, or tools when serving and advising clients.
As a general matter, basic rules of professional conduct still provide clear guidance on best practices for the use of AI tools themselves. For example, professional rules of conduct for lawyers uniformly impose basic duties of competence. These duties extend to emerging technology and require lawyers to keep abreast of new technologies. As a further example, Rule 3.3 of the ABA’s Model Rules of Professional Conduct and similar local rules urge lawyers to verify that any information generated using AI is truthful and accurate. The same rules also urge lawyers to promptly correct any information that was wrongly generated using AI as soon as the error comes to their attention. Similarly, professional conduct rules that address supervision require appropriate supervision of the use of AI tools.
While state bar ethics hotlines can provide a wealth of information on current ethical rules that might apply to prospective legal conduct involving the use of AI by lawyers, ethical guidance is still evolving. Below is a list of ethics opinions on the use of AI available at the time of publication. These opinions address a variety of duties that need careful attention when using AI (for example, duties of confidentiality might prohibit entering client information into AI tools without informed consent, and rules on the creation of lawyer-client relationships might require disclaimers when using chatbots to communicate with or advise clients).
ABA Materials:
State Bar Opinions:
California
District of Columbia
Florida
Kentucky
Michigan
- State Bar of Michigan, Ethics Opinion JI-155 (issued October 27, 2023) (counseling that judges need to balance duties of competence to understand and properly use technology (including AI), as well as set boundaries to ensure they are used within the confines of the law and court rules).
- State Bar of Michigan, Artificial Intelligence for Attorneys—Frequently Asked Questions, https://www.michbar.org/opinions/ethics/AIFAQs.
New Jersey
Court Decisions
Appendix II: Annotated Bibliography
The Database of AI Litigation, George Washington University, https://blogs.gwu.edu/law-eti/ai-litigation-database/, (containing information about ongoing and completed litigation involving artificial intelligence, including machine learning).
Ivey Dyson, How AI Threatens Civil Rights and Economic Opportunities, BRENNAN CTR. FOR JUST. (Nov. 16, 2023).
- Summary: Well before the current interest in AI, “government agencies and companies are already employing AI systems that churn out results riddled with inaccuracies and biases that threaten civil rights, civil liberties, and economic opportunities.” “Inaccuracies or flawed designs within AI systems can also create barriers to accessing essential public benefits. In Michigan, one algorithm deployed by the state’s unemployment insurance agency wrongly flagged around 40,000 people as committing unemployment fraud, resulting in fines, denied benefits, and bankruptcy.”
- “A letter sent this month to Congress by the Brennan Center and more than 85 other public interest organizations suggests a place to start: draw on the expertise of civil society and the communities most impacted by these technologies to come up with regulation that addresses the harms AI is already causing while also preparing for its future effects.”
Rebecca Kelly Slaughter, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, ISP Digital Future Whitepaper & YJoLT Special Publication, Yale Journal of Law and Technology (August 2021).
Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan & Cass R. Sunstein, Algorithms as Discrimination Detectors, 117 PROC. OF THE NAT’L ACAD. SCI. (Dec. 1, 2020), https://www.pnas.org/content/pnas/117/48/30096.full.pdf.
Matt Kasman & Jon Valant, The Opportunities and Risks of K-12 Student Placement Algorithms, BROOKINGS INST. (Feb. 28, 2019), https://www.brookings.edu/research/the-opportunities-and-risks-of-k-12-student-placement-algorithms/.
Cade Metz, London A.I. Lab Claims Breakthrough That Could Accelerate Drug Discovery, N.Y. TIMES (Nov. 30, 2020), https://www.nytimes.com/2020/11/30/technology/deepmind-ai-protein-folding.html.
Irene Dankwa-Mullan, et al., Transforming Diabetes Care Through Artificial Intelligence: The Future Is Here, 22 POPULATION HEALTH MGMT. 229, 240 (2019).
Alvaro Bedoya, The Color of Surveillance, SLATE (Jan. 18, 2016), https://slate.com/technology/2016/01/what-the-fbis-surveillance-of-martin-luther-king-says-about-modern-spying.html.
Amy Cyphert, Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students, 20 NEV. L. J. 457 (May 2020).
Clare Garvie, Alvaro Bedoya & Jonathan Frankle, The Perpetual Line-Up: Unregulated Police Face Recognition in America, GEO. L. CTR. PRIVACY & TECH. (Oct. 18, 2016), https://www.perpetuallineup.org.
Kashmir Hill, Another Arrest and Jail Time, Due to a Bad Facial Recognition Match, N.Y. TIMES (Jan. 6, 2021), https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html.
Algorithms in the Criminal Justice System: Pre-Trial Risk Assessment Tools, ELEC. PRIVACY INFO. CTR., https://epic.org/algorithmic-transparency/crim-justice (last visited Jan. 17, 2020).
Jason Tashea, Courts Are Using AI to Sentence Criminals. That Must Stop Now, WIRED (Apr. 17, 2017), https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.
Adam S. Forman, Nathaniel M. Glasser & Christopher Lech, INSIGHT: Covid-19 May Push More Companies to Use AI as Hiring Tool, BLOOMBERG L. (May 1, 2020, 4:00 AM), https://news.bloomberglaw.com/daily-labor-report/insight-covid-19-may-push-more-companies-to-use-ai-as-hiring-tool.
Miriam Vogel, COVID-19 Could Bring Bias in AI to Pandemic Level Crisis, THRIVE GLOBAL (June 14, 2020), https://thriveglobal.com/stories/covid-19-could-bring-bias-in-ai-to-pandemic-level-crisis/.
Natasha Singer, Where Do Vaccine Doses Go, and Who Gets Them? The Algorithms Decide, N.Y. TIMES (Feb. 7, 2021), https://www.nytimes.com/2021/02/07/technology/vaccine-algorithms.html.
Eileen Guo & Karen Hao, This is the Stanford Vaccine Algorithm that Left Out Frontline Doctors, MIT TECH. REV. (Dec. 21, 2020), https://www.technologyreview.com/2020/12/21/1015303/stanford-vaccine-algorithm/.
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019).
Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (2019).
European Parliament, Briefing: Economic Impacts of Artificial Intelligence (AI) (2019), https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf.
- “AI has significant potential to boost economic growth and productivity, but at the same time it creates equally serious risks of job market polarization, rising inequality, structural unemployment and emergence of new undesirable industrial structures.”
Colleen V. Chien & Miriam Kim, Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap (Mar. 14, 2024), UC Berkeley Public Law Research Paper, LOY. L.A. L. REV. (forthcoming), https://ssrn.com/abstract=4733061.
- “AI tools can significantly enhance legal professionals and narrow the justice gap, but that how they are introduced matter - though women comprise the majority of public interest lawyers, organic uptake of generative AI was much higher among men in our study. Assistance can also improve tool adoption. The participants’ positive experiences support viewing AI technologies as augmenting rather than threatening the work of lawyers. As we document, legal-aid lawyer directed technological solutions may have the greatest potential to not just marginally, but dramatically, increase service coverage, and we suggest some steps, such as exploring regulatory sandboxes and devising ways to institute voluntary certification or “seal of approval” programs verifying the quality of legal aid bots to support such generative collaborations. Along with the paper, we release a companion database of 100 helpful use cases, including prompts and outputs, provided by legal aid professionals in the trial, to support broader adoption of AI tools.”
Goldman Sachs, Global Economics Analyst: The Potentially Large Effects of Artificial Intelligence on Economic Growth (Briggs/Kodnani).
- “If generative AI delivers on its promised capabilities, the labor market could face significant disruption. Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.”
AI Commission Report, U.S. Chamber of Commerce Technology Engagement Center (2023), https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Report_v6.pdf.
- “This debate must answer several core questions: What is the government’s role in promoting the kinds of innovation that allow for learning and adaptation while leveraging core strengths of the American economy in innovation and product development? How might policymakers balance competing interests associated with AI—those of economic, societal, and quality-of-life improvements—against privacy concerns, workforce disruption, and built-in-biases associated with algorithmic decision-making? And how can Washington establish a policy and regulatory environment that will help ensure continued U.S. global AI leadership while navigating its own course between increasing regulations from Europe and competition from China’s broad-based adoption of AI?”
- “Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment…. A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.” (p. 10)
- The Report names five pillars for AI Regulation: Efficiency, Collegiality, Neutrality, Flexibility, and Proportionality. (p. 11)
- “Use an Evidence-Based Approach: Policymakers must take action to understand the potential impact of AI on the American workforce by leveraging new data sources and advanced analytics to understand the evolving impact of AI and machine learning on the American public.” (p. 12)
European Commission, A European Strategy for Artificial Intelligence (April 23, 2021), https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentation-CEPS-Webinar-L.-Sioli-23.4.21.pdf.
Government of Canada, Directive on Automated Decision-Making (April 1, 2021), https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.
Government:
- White House Office of Science and Technology Policy (OSTP), Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (October 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
- AI.gov, The Government is Using AI to Better Serve the Public (a compiled inventory of the use of AI across government agencies).
- US Executive Order on Safe, Secure and Trustworthy AI
- U.S. National Science and Technology Council, Preparing for the Future of AI (Oct. 2016).
- Standards Administration of China, White Paper of AI Standardization (Jan 2018)
- Villani, C., For a Meaningful Artificial Intelligence: Towards a French and European Strategy, AI for Humanity (2018), https://www.aiforhumanity.fr/ (accessed 15 December 2018).
- European Commission, AI for Europe (Apr. 2018).
- UK House of Lords, Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, HL Paper 100, Session 2017-19 (Apr. 2018), https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm.
- NITI Aayog, National Strategy for Artificial Intelligence #AIFORALL, https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf.
- British Embassy in Mexico City, AI in Mexico (Jun. 2018), https://go.wizeline.com/rs/571-SRN-279/images/Towards-an-AI-strategy-in-Mexico.pdf.
- German Federal Ministries of Education, Economic Affairs, and Labour and Social Affairs, AI Strategy (Nov. 2018), https://www.bundesregierung.de/breg-en/service/archive/ai-a-brand-for-germany-1551432.
- Smart Dubai, AI Principles and Ethics (Jan. 2019), https://digitaldubai.ae/docs/default-source/ai-principles-resources/ai-ethics.pdf.
- Monetary Authority of Singapore, Principles to Promote FEAT AI in the Financial Sector (Feb 2019)
- Government of Japan; Cabinet Office; Council of Science, Technology and Innovation, Social Principles of Human-Centric AI, https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf
- European High Level Expert Group on AI, Ethics Guidelines for Trustworthy AI (Apr. 2019), https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
- Chinese National Governance Committee for AI, Governance Principles for a New Generation of AI (Jun. 2019), https://www.loc.gov/item/global-legal-monitor/2019-09-09/china-ai-governance-principles-released/.
Intergovernmental Organizations:
- Council of Europe Convention on AI, Human Rights, Democracy and the Rule of Law
- Council of Europe: The European Commission for the Efficiency of Justice, European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment (Dec. 2018), https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c.
- OECD, OECD Principles on AI (May 2019), https://oecd.ai/en/ai-principles.
- G20, G20 AI Principles (June 2019), https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf. (“Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.” at 1).
- The UN Global Digital Compact, https://www.un.org/techenvoy/global-digital-compact.
- European Union Artificial Intelligence Act, https://artificialintelligenceact.eu/.
Civil Society:
- Access Now, Human Rights in the Age of AI (Nov. 2018), https://www.accessnow.org/wp-content/uploads/2018/11/AI-and-Human-Rights.pdf/.
- The Public Voice Coalition, Universal Guidelines for AI, (Oct. 2018), https://thepublicvoice.org/ai-universal-guidelines/.
- T20: Think20, Future of Work and Education for the Digital Age (Jul. 2018), https://t20argentina.org/wp-content/uploads/2018/09/Bridges-to-the-Future-of-Education-Policy-Recommendations-for-the-Digital-Age.pdf.
- Amnesty International/Access Now, Toronto Declaration (May 2018), https://www.amnesty.org/en/documents/pol30/8447/2018/en/.
- UNI Global Union, Top 10 Principles for Ethical AI (Dec. 2017), https://uniglobalunion.org/report/10-principles-for-ethical-artificial-intelligence/.
- Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence (April 2024), https://nysba.org/app/uploads/2022/03/2024-April-Report-and-Recommendations-of-the-Task-Force-on-Artificial-Intelligence.pdf.
Private Sector:
- IBM, IBM Everyday Ethics for AI (Dec. 2017), https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf.
- IA Latam, Declaration of the Ethical Principles for AI (Feb. 2019)
- Telia Company, Guiding Principles on Trusted AI Ethics (Jan. 2019), https://linking-ai-principles.org/term/322.
- Telefónica, AI Principles of Telefónica (Oct. 2018), https://www.telefonica.com/en/wp-content/uploads/sites/7/2021/11/principios-ai-eng-2018.pdf.
- Google, AI at Google: Our Principles (Jun. 2018), https://ai.google/responsibility/principles/.
- Microsoft, Microsoft AI Principles (Feb. 2018), https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5cmFl?culture=en-us&country=us.
- ITI, AI Policy Principles (Oct. 2017), https://www.itic.org/news-events/news-releases/iti-unveils-first-industry-wide-artificial-intelligence-policy-principles.
- Tencent Institute, Six Principles of AI (Apr. 2017), https://ciss.tsinghua.edu.cn/upload_files/atta/1589021813147_7B.pdf.
- AI Industry Alliance, AI Industry Code of Conduct (Jun. 2019).
- Beijing Academy of AI, Beijing AI Principles (May 2019), https://link.springer.com/content/pdf/10.1007/s11623-019-1183-6.pdf.
- New York Times, Seeking Ground Rules for AI (Mar. 2019), https://www.nytimes.com/2019/03/01/business/ethical-ai-recommendations.html.
Multi-Stakeholders:
- Partnership on AI, Tenets (Sep. 2016), https://partnershiponai.org/about/.
- Future of Life Institute, Asilomar AI Principles (Jan. 2017), https://futureoflife.org/open-letter/ai-principles/.
- University of Montreal, Montreal Declaration (Dec. 2018), https://montrealdeclaration-responsibleai.com/.
- IEEE, Ethically Aligned Design (Mar. 2019), https://standards.ieee.org/news/ieee-ead1e/.
Other Resources:
- Hilke Schellmann, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now (2024)
- Brian Christian, The Alignment Problem: Machine Learning and Human Values (2020)
- Chris Wiggins, Matthew L. Jones, How Data Happened: A History from the Age of Reason to the Age of Algorithms (2023)
- Jamie Metzl, Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World (2024)
- Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2022)
- Michael Wooldridge, A Brief History of Artificial Intelligence
- John Brockman (ed.), Possible Minds: Twenty-Five Ways of Looking at AI (2019)
- Margaret Boden, Artificial Intelligence and Natural Man (2016)
- Michael R. Genesereth and Nils J. Nilsson, Logical Foundations of Artificial Intelligence
- Booz Allen Hamilton, The Artificial Intelligence Primer.