
GPSolo September/October 2024: Election Law

Election Law, Artificial Intelligence, and Alternative Facts

Kim Wyman and Carah Ong Whaley

Summary

  • The rapid deployment of AI has the potential to undermine election integrity in the United States and around the world.
  • Generative artificial intelligence (GenAI) technologies such as deepfakes can be used to confuse voters about rules and to discourage voting among particular demographics.
  • Countries such as Russia, China, Iran, and Venezuela are purposefully experimenting with GenAI to manipulate the information ecosystem and undermine democracy.
  • A majority of Americans believe that federal action is needed to curb the use of fake images and videos generated by AI in elections and political campaigns.

“Alternative facts” was a phrase coined by Kellyanne Conway, U.S. counselor to the president, during a Meet the Press interview on January 22, 2017, when she was asked to defend a false statement about the attendance numbers of Donald Trump’s inauguration as president of the United States. With the rapid proliferation of accessible artificial intelligence (AI) tools that generate realistic text, audio, and images, it’s not just false statements we have to worry about. The rapid deployment of AI, particularly generative capabilities such as deepfake creation, has made it cheaper, easier, and faster to manipulate public perceptions, posing unprecedented threats to our information ecosystem, with grave implications for informed participation in democracy. These tools have the potential to undermine election integrity in the United States and around the world, with the 2024 campaign potentially becoming the election of artificial facts.

Artificial Intelligence: Potential Benefits, Potential Threats

Generative artificial intelligence (GenAI) technologies are just tools and can also be used as a force for good in elections. For a challenger running on a tight budget without the funds for ads, campaign materials, a professionally crafted slogan, or a team of volunteers to reach out to voters online or by phone, AI could provide a chance to level the playing field against better-funded incumbents. For election officials looking to boost communication with voters, streamline resource allocation, catch fraud or irregularities in voting, or verify signatures on mail-in ballots, AI could also be a powerful tool to help them do their jobs more effectively. But without regulation, the same technologies can also be deployed to manipulate voters and undermine confidence in elections.

GenAI technologies give any actor, including foreign adversaries, the ability to systematically create hyperrealistic content, generating copies and impersonations of faces and voices that can be nearly impossible to distinguish from real life. In combination with personal data, such tools can also be used as part of a political strategy to confuse voters about rules and to discourage voting among particular demographics or within specific geographic areas or communities. What’s more, they can also be used as part of malign influence operations to further sow divisions among the electorate and destabilize domestic politics. In addition to increasing the risk of the public believing false information, the use of AI to generate and share content also erodes public trust in authentic content.

The challenge also extends to election administration: AI can be used to generate malware to attack election infrastructure, target election offices, and automate the harassment of election workers. Given the decentralized nature of U.S. elections, the burden of addressing this unprecedented issue falls on state and local officials. Recently, Grok, an AI chatbot operated by X (formerly known as Twitter), disseminated false information regarding ballot laws in nine states. In response, Minnesota Secretary of State Steve Simon and four other secretaries of state wrote a letter to X owner Elon Musk urging him to correct the misinformation. The letter highlights the broader challenges posed by AI models that often fail to provide accurate voting information and suggests a simple, proactive solution: directing users to resources provided by election officials, such as CanIVote.org.

Deepfakes

Deepfakes are videos, photos, or audio recordings that seem real but have been manipulated with AI. They allow any actor—domestic or foreign, state or nonstate—to invent or reshape reality in the form of images, audio, and video. The underlying technology can replace faces, manipulate facial expressions, and synthesize faces and speech. Deepfakes can depict someone appearing to say or do something that they, in fact, never said or did. The technology can be used to depict or share events that never happened or to recontextualize events that did.

There are already examples of this type of manipulation by domestic actors that have impacted the 2024 election, from synthetic robocalls of President Joe Biden encouraging voters to skip the primaries in New Hampshire, to deepfaked images of former President Donald Trump used to propel “inside job” conspiracy theories about the first assassination attempt on his life, to pro-Kremlin propagandists using AI-generated audio as purported evidence for false claims that Barack Obama had suggested the Democratic Party was behind that failed attempt.

AI technologies and the ways they are used are continuously improving and evolving. As a result, it is less expensive, easier, and faster to produce new content or “synthetic media,” including convincing but false images of public figures and events.

Use of AI by Foreign Nations to Disrupt Democracy

Research shows that some countries—including Russia, China, Iran, and Venezuela—are purposefully experimenting with GenAI to manipulate the information ecosystem and undermine democracy. These countries have used AI technologies to disrupt democratic elections around the world. Intelligence agencies have repeatedly warned that the United States must be prepared to address the threats posed by the use of AI for malign purposes in U.S. elections. For example, in its 2024 annual report, the Office of the Director of National Intelligence warned that Russia, China, and Iran are “growing more sophisticated in digital influence operations that try to affect foreign publics’ views, sway voters’ perspectives, shift policies, and create social and political upheaval.” The range of strategies and tools they use for malign influence efforts has both improved and expanded to leverage all elements of the information space, including ownership of online media outlets and tech platforms, business and advertising pressure, and traditional censorship techniques, as well as deepfakes, bot armies, and microtargeting.

Furthermore, large language models (LLMs) and synthetic media generators that can convert text to image, text to audio, and text to video present additional critical challenges to the information environment broadly that can be applied to elections specifically. Previously, it was feasible to identify foreign inauthentic accounts through their misuse of the English language. However, LLMs have made it possible to translate content from any language into nearly flawless English. Suppressing the votes of marginalized communities, including language-minority citizens, is a very old political strategy, but AI tools can translate text and audio across languages, greatly reducing the time and resources previously required to target language groups. Bad actors may also dub or subtitle an official news source with false information in order to confuse voters. In addition, LLMs can be used to rapidly generate automated and persuasive propaganda that can be scaled up and distributed widely across a range of digital platforms.

A May 2024 survey conducted by Issue One with Citizen Data found that a vast majority of respondents said they either “strongly agree” or “somewhat agree” that Congress should take action to address the spread of false election information through fake images and videos generated by artificial intelligence.

Can We Create Safeguards in Time?

Federal Safeguards

A majority of Americans agree: Federal action is needed to curb the use of fake images and videos generated by AI in elections and political campaigns. At the federal level, the AI Transparency in Elections Act (S. 3875), the Protect Elections from Deceptive AI Act (S. 2770), and the Preparing Election Administrators for AI Act (S. 3897) are bipartisan, vetted bills that would build necessary safeguards against the negative effects of this rapidly developing technology. While more must be done to buttress our election infrastructure, particularly through congressional appropriation of robust and consistent federal election security grants, these bills would put some initial safeguards in place. The Biden administration has also issued an executive order directing the development of standards for “watermarking,” or clearly labeling, AI-created content.

The Federal Election Commission (FEC) has been dragging its feet on a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election, and as of this writing (late summer 2024), it appears it will fail to do so. Intentional misrepresentations of voting rules are already a felony, but the FEC ought to use its authority to clarify how existing federal law against “fraudulent misrepresentation” in campaign communications applies to AI-generated deepfakes. However, even if the FEC miraculously acts to clarify the law on fraudulent misrepresentation, it wouldn’t enable the agency to require outside groups, such as political action committees (PACs), to disclose when they imitate a candidate using AI technology. So, loopholes remain.

While the FEC has been slow to act, the Federal Communications Commission (FCC) has proposed rules that would require both on-air disclosures and written disclosures in broadcasters’ political files when AI-generated content is used, especially to create deceptive deepfakes. The FCC’s proposed rules are especially important because broadcasters do not interpret current rules as requiring disclosure of AI-generated content and, in fact, may go so far as to read current FCC rules as prohibiting them from requiring disclosure. The proposed rules would extend the disclosure requirements to both candidate and issue advertisements, which is important for recognizing the broad impact of AI-generated content and ensuring that all political messaging is subject to the same transparency standards. Finally, the proposed rules would apply the disclosure requirements to all pertinent content carriers, ensuring that all major content platforms under the FCC’s jurisdiction are covered at the local, state, and federal levels. However, as previously noted, a wide range of actors use AI, and while the FCC’s rules would address political campaigns, they are no panacea.

State Safeguards

Many states have also enacted measures to address the use of generative AI in elections, including Alabama, Arizona, California, Colorado, Delaware, Florida, Hawaii, Idaho, Indiana, Michigan, Minnesota, Mississippi, New Hampshire, New Mexico, New York, Oregon, Texas, Utah, and Wisconsin. Other states, including New Jersey, Massachusetts, and Virginia, are considering such laws.

When it comes to the use of generative AI in political ads, many states have adopted a straightforward solution: requiring clear disclosures. The idea is to let people know when they’re seeing or hearing content that’s been crafted by AI. Most states with laws on this issue simply ask for a disclosure—whether it’s an audio or text note—stating that the ad includes AI-generated content.

Utah, for example, requires disclosures tailored to how the AI is used. If an ad features only AI-created visuals, viewers will see a message like “This video content generated by AI.” If only the audio is synthetic, listeners will hear a statement like “This audio content generated by AI.” Florida has adopted a similar approach specifically for the use of GenAI in elections. If an ad made with the intent to injure a candidate depicts someone doing something they did not actually do, Florida law requires a disclosure: “Created in whole or in part with the use of generative artificial intelligence (AI).” Failing to include this warning doesn’t result in just a slap on the wrist—it’s a first-degree misdemeanor for anyone who funds, sponsors, or approves the ad.

While disclosures are an important first step, they may not go far enough to prevent harm. Viewers or listeners may simply miss the disclosures, for example.

Not all states merely require disclosures. Texas, for instance, has taken a tougher stance, making it a crime to publish a “deepfake” video within 30 days of an election “with intent to injure a candidate or influence the result of an election.” However, there has already been some pushback: one Texas court ruled that the law was unconstitutional because it was not narrowly tailored to a compelling state interest. Arizona’s law stops people from impersonating a candidate or someone on the ballot, but it doesn’t cover other types of impersonations, such as a fake news anchor talking about a candidate.

In addition, it is still very difficult to detect and trace the origin of GenAI content, and enforcing statutes and policies poses myriad challenges. Take, for example, social media companies, which have largely failed to address the threats posed by the spread of disinformation. Most companies have severely cut back content moderation teams and shelved fact-checking and tracking tools.

What to Do: Touch Grass

The metaphor “touch grass” is used when someone is spending too much time online and needs to spend time outside, disconnect from technology, and engage with the physical world. Surveys show that the majority of Americans are concerned that AI will be used to manipulate the outcome of the 2024 elections. And they are right to be concerned. Voters need to be aware that bad actors might spread false information using AI tools to create highly personalized and interactive content that misleads people about conditions at voting sites, voting rules, or whether voting is worthwhile. Voters should also know that they may be targeted with false content that is highly specific or personal. False content may come to them in texts, messaging apps, and phone calls. We all need to “touch grass” by acting only on election information from official and credible sources and always double-checking any information about a polling location by contacting the local elections office or visiting a state’s election website.

Lawyers can “touch grass” by getting involved with the American Bar Association’s Democracy Task Force, whose mission, in part, is to “bolster voter confidence in elections by safeguarding the integrity and non-partisan administration of elections, and by providing support for election workers and officials.” Lawyers can also help address the dangers posed by AI technologies by being at the forefront of efforts to enact legislation at the federal level and in states that have yet to do so. They can also contribute their expertise on the enforcement side to ensure those who violate laws are held accountable.
