June 03, 2024 HUMAN RIGHTS

Human Rights Challenges with Artificial Intelligence

By Lucy L. Thomson and Trooper Sanders

The American Bar Association (ABA) has a long history of identifying the most important legal issues of our time. Since 2001, the ABA Center for Human Rights has conducted critical human rights initiatives ranging from international criminal justice to public health. The rapid and widespread adoption of artificial intelligence (AI) technology in every industry, in government, and beyond raises complex questions about the broader impact on society and fundamental rights. 

The ABA Task Force on Law and AI addresses the legal challenges of AI, the impact of AI on the practice of law, and related ethical implications.

With every transformation come complex and challenging legal and ethical questions, and the impact of AI on human rights presents new ones. Privacy and combating discrimination have been initial areas of focus. AI systems rely on huge amounts of data, including sensitive personal data, to train algorithms and enhance performance. AI is also being used to make predictions and decisions about individuals, both as consumers and as employees, raising questions about algorithmic bias and possible discriminatory effects.

In the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People released in October 2022, the White House identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of AI. The Blueprint addresses data privacy and two other human rights principles: the right to be protected from unsafe or ineffective AI systems and the right to receive notice and explanation of algorithmic decisions impacting individuals’ lives.

Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued October 30, 2023, declared that AI policies must advance equity and civil rights: “From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life. Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.” In addition, the EO emphasized that “Americans’ privacy and civil liberties must be protected as AI continues advancing. Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires. Artificial Intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed.” 

It is important to assess the wide range of human rights consequences of introducing AI decision-making into a variety of fields, including criminal justice (risk assessments); finance (credit scores); health care (diagnostics); content moderation (standards enforcement); human resources (recruitment and hiring); and education (essay scoring). In addition, the human rights consequences of the use of AI in warfare, including applications beyond human control, are concerning.

Internationally, the European Union (EU) AI Act protects human rights by banning from the EU some AI systems on the grounds that they pose an unacceptable risk to basic rights and freedoms. These include emotion-recognition tools in the workplace or educational institutions, the biometric categorization of sensitive data such as sexual orientation, and some forms of predictive policing. The use of real-time remote biometric identification, or facial recognition, in public places is prohibited, except for use by law enforcement officials to prevent terrorism and to search for victims or perpetrators of serious crimes. High-risk AI systems must undergo a fundamental rights impact assessment before being introduced to the EU market, and public entities using high-risk AI must register them in a database. Citizens will be able to demand explanations of AI systems' decisions that impact their rights.

The ABA Task Force on Law and Artificial Intelligence was launched in August 2023 to address the legal challenges of AI, the impact of AI on the practice of law, and related ethical implications. The AI Task Force is working with lawyers, judges, and technology experts across the ABA to evaluate approaches to governance and assessing AI risks. To consider the impact of AI on laws, policies, and frameworks, the Task Force has created seven working groups focused on these AI issues: (1) AI and the Profession and Ethical Issues; (2) AI and the Courts; (3) AI Governance; (4) AI Risks; (5) AI Challenges: Generative AI; (6) Access to Justice; and (7) AI and Legal Education. The ABA has been at the forefront by urging the adoption of the guardrails of human oversight and control, accountability, and transparency in AI (Resolution 604, adopted February 2023).

The AI Task Force invites everyone to explore the webinars, articles, and news updates from across the ABA that are available on its website. These materials offer valuable insights into emerging legal risks from the use of AI and how to use AI in a trustworthy and responsible manner. They should also inform the critical and ongoing discussion among ABA members about balancing the need not to stifle technological innovation with ensuring that societal values are protected.

Lucy L. Thomson

Principal, Livingston PLLC, Washington, D.C.

Lucy L. Thomson is principal of Livingston PLLC in Washington, D.C., where she focuses her practice on cybersecurity, global data privacy, compliance, and risk management. She is chair of the ABA Task Force on Law and Artificial Intelligence.

Trooper Sanders

CEO, Benefits Data Trust

Trooper Sanders is the CEO of Benefits Data Trust, a nonprofit organization working to improve the effectiveness of social safety net programs. He is a member of the National AI Advisory Committee and a special advisor to the ABA AI Task Force.