October 13, 2024 ABA Task Force for American Democracy

Artificial Intelligence’s Threat to Democracy

Jen Easterly, Scott Schwab, and Cait Conley, Foreign Affairs, Jan. 3, 2024

Summary

This article examines ways in which artificial intelligence might be deployed to undermine trust in democratic elections, and it proposes steps that agencies, companies, and individuals should take to prevent AI threats.

Key Findings/Message

Artificial intelligence (AI) makes it easier, faster, and cheaper to spread election-related misinformation, which malicious domestic actors or foreign adversaries could use to undermine American democracy. Indeed, foreign interference in U.S. elections, such as hacking campaigns and disinformation spread on social media, has been increasing in recent years. One concerning example occurred just before Slovakia’s elections in September 2023, when a fake audio recording circulated on Facebook purported to show the leader of one of Slovakia’s political parties explaining to a journalist how to rig the election.

AI-generated content could be used to impersonate or target individuals, such as election officials or political candidates. It could be used to undermine trust in voter registration data, voting procedures, or election returns, which could then exacerbate threats against election officials. AI can generate not only deepfakes and misinformation used to target specific individuals or organizations, but also computer code and malware that could undermine election websites, servers, and communication channels.

Nevertheless, election officials are resilient and have a strong track record of adapting to unforeseen circumstances, such as hurricanes and other severe weather, a global pandemic, and other local disruptions. Officials have also recently established stronger digital and physical security protocols to maintain election integrity amid malicious attacks, supply chain disruptions, and other threats.

Key Recommendations
