Can We Create Safeguards in Time?
Federal Safeguards
A majority of Americans agree: Federal action is needed to curb the use of fake images and videos generated by AI in elections and political campaigns. At the federal level, the AI Transparency in Elections Act (S. 3875), the Protect Elections from Deceptive AI Act (S. 2770), and the Preparing Election Administrators for AI Act (S. 3897) are bipartisan, vetted bills that would build necessary safeguards against the negative effects of this rapidly developing technology. While more must be done to buttress our election infrastructure, particularly through congressional appropriation of robust and consistent federal election security grants, these bills would put some necessary initial safeguards in place. The Biden administration has also issued an executive order calling for the “watermarking,” or clear labeling, of AI-created content.
The Federal Election Commission (FEC) has been dragging its feet on a process to regulate AI-generated deepfakes in political ads ahead of the 2024 election, and as of this writing (late summer 2024), it appears it will fail to do so. Intentional misrepresentations of voting rules are already a felony, but the FEC ought to use its authority to clarify how existing federal law against “fraudulent misrepresentation” in campaign communications applies to AI-generated deepfakes. However, even if the FEC miraculously acts to clarify the law on fraudulent misrepresentation, that clarification would not empower the agency to require outside groups, such as political action committees (PACs), to disclose when they use AI technology to imitate a candidate. So, loopholes would remain.
While the FEC has been slow to act, the Federal Communications Commission (FCC) has proposed rules requiring both on-air disclosures and written disclosures in broadcasters’ political files when AI-generated content is used, especially to create deceptive deepfakes. The FCC’s proposed rules are especially important because broadcasters do not interpret current rules as requiring disclosure of AI-generated content and, in fact, may go so far as to read those rules as prohibiting them from requiring disclosure. The proposed rules would extend the disclosure requirements to both candidate and issue advertisements, recognizing the broad impact of AI-generated content and ensuring that all political messaging is subject to the same transparency standards. Finally, the proposed rules would apply the disclosure requirements to all pertinent content carriers, ensuring that all major content platforms under the FCC’s jurisdiction are covered at the local, state, and federal levels. However, as previously noted, a wide range of actors use AI, and while the FCC’s rules would address political campaigns, they are no panacea.
State Safeguards
Many states, among them Alabama, Arizona, California, Colorado, Delaware, Florida, Hawaii, Idaho, Indiana, Michigan, Minnesota, Mississippi, New Hampshire, New Mexico, New York, Oregon, Texas, Utah, and Wisconsin, have also enacted measures to address the use of generative AI in elections. Other states, including New Jersey, Massachusetts, and Virginia, are considering such laws.
When it comes to the use of generative AI in political ads, many states have adopted a straightforward solution: requiring clear disclosures. The idea is to let people know when they’re seeing or hearing content that’s been crafted by AI. Most states with laws on this issue simply require a disclosure, whether an audio or text notice, stating that the ad includes AI-generated content.
Utah, for example, tailors its required disclosures to how the AI is used. If the ad features only AI-created visuals, viewers will see a message like “This video content generated by AI.” If it’s just the audio that’s synthetic, listeners will hear a statement like “This audio content generated by AI.” Florida has adopted a similar approach specific to the use of GenAI in elections. If an ad made with the intent to injure a candidate shows someone doing something they didn’t actually do, Florida law requires a disclosure: “Created in whole or in part with the use of generative artificial intelligence (AI).” Failing to include this warning is more than a slap on the wrist: it’s a first-degree misdemeanor for anyone who funds, sponsors, or approves the ad.
While disclosures are an important first step, they may not go far enough to prevent harm. Viewers or listeners may not see or hear a disclosure at all, for example.
Not all states stop at disclosures. Texas, for instance, has taken a tougher stance, making it a crime to publish a “deepfake” video within 30 days of an election “with intent to injure a candidate or influence the result of an election.” There has already been some pushback, however: one Texas court ruled that the law was unconstitutional because it was not narrowly tailored to a compelling state interest. Arizona’s law bars impersonating a candidate or someone else on the ballot, but it doesn’t cover other types of impersonation, such as a fake news anchor talking about a candidate.
In addition, it remains very difficult to detect and trace the origin of GenAI content, and enforcing these statutes and policies poses myriad challenges. Take, for example, social media companies, which have largely failed to address the threats posed by the spread of disinformation. Most have severely cut back their content moderation teams and shelved fact-checking and tracking tools.
What to Do: Touch Grass
The expression “touch grass” is aimed at someone who is spending too much time online and needs to step outside, disconnect from technology, and engage with the physical world. Surveys show that a majority of Americans are concerned that AI will be used to manipulate the outcome of the 2024 elections. And they are right to be concerned. Voters need to be aware that bad actors might use AI tools to create highly personalized, interactive content that misleads people about conditions at voting sites, voting rules, or whether voting is worthwhile. Voters should also know that they may be targeted with false content that is highly specific or personal and that such content may reach them through texts, messaging apps, and phone calls. We all need to “touch grass” by acting only on election information from official and credible sources and by always double-checking any information about a polling location, whether by contacting the local elections office or visiting the state’s election website.
Lawyers can “touch grass” by getting involved with the American Bar Association’s Democracy Task Force, whose mission, in part, is to “bolster voter confidence in elections by safeguarding the integrity and non-partisan administration of elections, and by providing support for election workers and officials.” Lawyers can also help address the dangers posed by AI technologies by being at the forefront of efforts to enact legislation at the federal level and in states that have yet to do so. And they can contribute their expertise on the enforcement side to ensure that those who violate these laws are held accountable.