June 03, 2024 HUMAN RIGHTS

ABA Practitioner Survey Suggests Next Steps on AI

By the CRSJ AI and Economic Justice Project

Many credit recent developments in generative artificial intelligence (AI) tools like ChatGPT with the explosion of interest in AI. From a civil rights and economic justice perspective, however, there has been considerable interest for years in how automated systems affect people's lives, whether in tenant screening, criminal justice, surveillance, or other areas. In 2023, the Civil Rights and Social Justice Section (CRSJ) of the American Bar Association (ABA) set out to better understand the impact of AI on low-income and other marginalized individuals and communities. After compiling a review of the literature, we developed a survey and distributed it to attorneys in various practice settings, receiving nearly 200 responses. Here are a few high-level takeaways from the results that could inform how the ABA might better support its members and the clients they serve.

In 2023, CRSJ set out to better understand the impact of AI on low-income and other marginalized individuals and communities.

In short: many attorneys are familiar with the impact of automated systems on their clients but are not familiar with the details of how those systems work and are uncomfortable explaining them. Especially in the employment context, many are not familiar with federal or state laws governing the use of automated systems, and the majority of respondents were not comfortable advising clients on legal impacts. Respondents noted that these systems are not accessible to the public and are biased, or at least rely on biased datasets. They additionally noted a lack of access to the technology required to benefit from automated systems (e.g., reliable broadband access) and observed that automated denials often come without any transparency as to why an applicant was rejected and without any opportunity to provide context for the derogatory data point that caused the result.

Who Took the Survey?

More than 66 percent of respondents work in a small or large city, 11 percent work in multiple geographic settings, and 7 percent work in remote rural areas. In terms of clients served (respondents could select multiple options), 30 percent serve low-income individuals, 17 percent serve lower-middle-income individuals, 10 percent serve enterprises, and more than 96 percent serve individuals from marginalized backgrounds (racial or ethnic minority, LGBTQ+, disability, immigrant or refugee, or other). More than 25 percent of respondents work in consumer law, housing, or public benefits, while others work in criminal justice, immigration, procurement, education, or employment.

What the Survey Takers Said

Approximately one-third of survey respondents indicated they are familiar with each of the following: automated surveillance, fraud detection, generative AI, and risk scoring. Yet more than 70 percent said they are at least slightly uncomfortable explaining how any of those systems work. Nearly half indicated that they did not know when automated systems were even being used, although nearly 60 percent agreed that they knew how the systems impacted their clients. More than half indicated that they had not taken training or CLEs on automated systems and did not know where to find such training. More than half questioned whether automated systems were accurate, and a similar percentage felt the systems were biased. As for their own practice, more than 60 percent had not changed the way they work because of automated tools available to lawyers.

What the Results Mean

Although our 180+ survey respondents may not be a fully representative sample of the ABA or the profession more broadly, a few key points stand out to us. The ABA should offer training and CLEs on automated systems. It should advocate for greater transparency and accuracy, and less bias, in automated systems. And it should consider what it can do to ensure that marginalized communities, such as those without reliable broadband access, are neither victimized by these systems without ever having a voice in their development or use, nor overlooked as more affluent communities celebrate the benefits of technological advances that are not universally enjoyed by everyone subject to the laws. CRSJ will be working to advance these goals within the ABA, with support from the ABA president's newly minted Taskforce on AI (whose focuses include risk mitigation and access to justice) and other groups.