May 06, 2024 ABA Task Force for American Democracy

Deepfakes and American Elections

N. David Bleisch

Background

Innovation is a double-edged sword.  This is poignantly exemplified by the digital phenomenon known as “deepfakes.”  Deepfakes are hoax images, sounds, and videos that convincingly depict people saying or doing things that they did not actually say or do.  Enabled by generative artificial intelligence and other sophisticated technologies, deepfakes have created new opportunities, but also alarming vulnerabilities.  When used maliciously, these technologies can distort reality, undermine trust, and damage reputations, with potentially dire consequences for democratic processes. The possible exploitation of deepfakes to manipulate political landscapes and, more specifically, to harm political candidates or election officials, skew election results or falsely question the integrity of electoral outcomes is an area of vital concern for the Task Force.

In a demonstration of the power of these technologies, a deepfake video of former President Barack Obama was created by comedian Jordan Peele in 2018 to raise awareness about the potential dangers of deepfakes.  The technologies available to create convincing deepfakes, including generative artificial intelligence, have advanced significantly since then.  Political deepfakes have already been used in the United States.  In 2023, deepfake technology was used to clone the voice of a Chicago mayoral candidate on a fake news outlet on Twitter, making it appear that the candidate condoned police violence.  Also in 2023, the DeSantis campaign shared an attack ad that used AI-generated deepfake images of presidential candidate Donald Trump hugging Anthony Fauci, the former White House chief medical advisor, who is disliked and derided by many in Trump's base.  In 2024, the New Hampshire attorney general's office began investigating robocalls that used a voice resembling President Biden's to encourage listeners not to vote in the primary election, in what the office said "appear[s] to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters."

In apparent recognition of the harm that can be caused by political deepfakes, Google updated its political content policy in September 2023 to require that verified election advertisers “must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.” The update includes examples of ad content that would require disclosure, including “synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do.”  Similarly, in November 2023 Microsoft announced plans to help protect electoral processes from deepfakes, including an upcoming “Content Credentials as a Service” tool.  Meta also announced a new policy requiring advertisers to disclose whenever a political advertisement on Facebook or Instagram includes a material deepfake.

These instances underscore the potential havoc deepfakes can wreak on elections if left unchecked.  It is recommended that the Task Force urge state lawmakers to step up their efforts to deter the political misuse of deepfakes by providing lawmakers with model legislation for their use and adoption.

Problem Statement

How can bad actors be deterred from and held accountable for attempting to unfairly influence elections or falsely impugn election outcomes by using artificially generated deepfakes?

Proposed Solution

The Task Force should propose model state legislation prohibiting the use of artificially generated deepfakes for the purpose of harming election officials or candidates for public office or for the purpose of improperly influencing elections or election outcomes.  The proposed model legislation makes it a crime to create a false impression of voter fraud, falsely claim or imply that an election has been stolen, falsely impugn the integrity or security of an election or balloting system, falsely discredit the veracity of electoral results, or otherwise convey false information intended to erode confidence or participation in an election, unless the deepfake contains a specified disclosure stating that it is a deepfake.  The proposed model legislation is attached as Addendum I.

Similar state laws have already been passed.  In 2019, Texas passed a law making it a Class A misdemeanor, punishable by up to a year in jail and a $4,000 fine, to create a deepfake video and publish it within 30 days of an election with the intent to injure a candidate or influence the result of an election.  Also in 2019, California passed a law prohibiting the distribution, with actual malice (defined as with knowledge of manipulation or with reckless disregard of whether or not there was manipulation), of materially deceptive media of a candidate within 60 days of an election with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure that it has been manipulated.

A federal bill (H.R. 6088) introduced in the 116th Congress (2019-2020) would have made it a crime to distribute deepfakes without specified disclosures within 60 days of a federal election, with actual malice and with the intent to harm a candidate's reputation or deceive a voter into voting for or against a candidate.  Violators could have been punished with a fine, up to 5 years in prison, or both.  The bill did not pass, although the wording of large portions of the California law and the failed federal bill are nearly identical.

In 2023, Minnesota passed a law prohibiting the dissemination of a deepfake within 90 days before an election with the intent to injure a candidate or influence the outcome of an election.  Violators can be punished by fines and up to 90 days in prison for the first offense and up to 5 years in prison for additional offenses.  Washington passed a law that bans the use of synthetic media to harm or influence candidates or elections in the state.  That law provides for injunctive relief and the recovery of damages unless the media includes a specified disclosure.  In November 2023, Michigan passed a law that requires all qualified political advertisements “generated in whole or substantially by artificial intelligence” to contain a disclosure specifying same.

The U.S. Federal Election Commission (FEC) was petitioned by Public Citizen to clarify that an existing law against campaign ad "misrepresentations" applies to AI-generated deepfakes.  It is unclear whether the FEC will determine that it has the authority to regulate such ads.

A bipartisan bill titled “The Protect Elections from Deceptive AI Act” was introduced in the U.S. Senate in September 2023.  The federal bill would prohibit the distribution of materially deceptive AI-generated media related to candidates for Federal office. As of the date of this working paper, the federal bill is still under consideration by the Senate Rules and Administration Committee. If the federal bill is eventually enacted, it will only apply to federal elections.  The proposed model state legislation also applies to state and local elections.  Additionally, the federal bill does not apply to malicious deepfakes of election officials or deepfakes concerning election integrity that falsely impugn the outcome of an election.  The proposed model legislation does.

The proposed model legislation is narrowly drafted to withstand First Amendment or other ‘free speech’ challenges.  The model legislation includes (a) an exception if the person deep-faked provides their consent, (b) an ‘actual malice’ standard (with knowledge of falsity or reckless disregard for the truth) similar to that for defamation actions involving public figures or controversies, (c) a limitation on its applicability to a specified period before or after an election, (d) an exception if the deepfake contains a specified disclosure stating that it is a deepfake, (e) exemptions for media outlets and providers that distribute an otherwise prohibited deepfake if specified conditions are met to avoid chilling legitimate news reporting, and (f) an exemption for deepfakes that constitute parodies or satire.

Next Step

If the Task Force elects to pursue this idea, the next step would be to engage Task Force members and/or appropriate lobbyists to help identify legislative sponsors in the remaining states and the District of Columbia.  A Fact Sheet (patterned after the one used in connection with passing the California law) that can be used to help explain the purpose of the proposed model legislation is attached as Addendum II.

Addendum I

Model Legislation Prohibiting the Creation and/or Dissemination of a Deepfake to Maliciously Harm a Political Candidate or Election Official, Improperly Influence the Outcome of an Election or Falsely Impugn an Election Outcome and Similar Actions

Section 1. Purpose

The purpose of this law is to prevent deepfakes created with generative artificial intelligence and other sophisticated technologies from being used to maliciously harm political candidates or election officials, improperly influence elections, falsely impugn election outcomes, or take similar actions.

Section 2. Definitions

As used herein, the following terms have the meanings given.

“Candidate” means an individual who is seeking nomination or election to a federal, state, legislative, judicial, or local office.

“Deepfake” means any technology-generated image of a person or any video, sound, image, or other technological representation of speech or conduct substantially derivative thereof:

(a)  that appears to a reasonable person to depict speech or conduct of a person who did not in reality engage in such speech or conduct or otherwise present something that did not, in fact, occur; and

(b) if the depicted individual is a real person, was created or produced without the consent of the depicted individual.

“Depicted individual” means an individual who appears in a deepfake to be engaging in speech or conduct in which they did not in reality engage.

“Disseminate” means to distribute to one or more persons or produce, broadcast or otherwise publish by or through any publicly available medium.

“Election official” means an individual who is authorized by law, regulation, or by appointment by a competent authority to perform functions related to the preparation, conduct, and oversight of elections and electoral processes.  This includes, but is not limited to, individuals with statewide or local election oversight responsibilities, individuals engaged in the registration of voters and the maintenance of voter rolls, staff responsible for the certification, testing, deployment and security of voting equipment, poll workers, precinct officials and election judges who assist in the operation of voting centers during early voting and on election days, and individuals involved in the tallying, reporting, auditing and certification of election results.

Section 3. Creation or dissemination of deepfake in attempt to influence an election; violation 

Except as provided in Section 5, a person who commissions, aids or conspires to create and/or disseminate a deepfake or who enters into a contract or other agreement to create and/or disseminate a deepfake is guilty of a crime and shall be fined and sentenced as provided in Section 6 if the person knows that the item is a deepfake or acts with reckless disregard concerning whether or not the item is a deepfake, and the creation and/or dissemination:

  1. takes place within 90 days before an election; and
  2. is made with the intent to defame, mock, threaten or impair a candidate’s reputation, safety or prospects in the election or an election official’s reputation or safety, or to improperly influence the outcome of the election.

Section 4. Creation or dissemination of a deepfake in an attempt to falsely impugn the integrity of an election or the outcome of an election; violation

Except as provided in Section 5, a person who commissions, aids or conspires to create and/or disseminate a deepfake or who enters into a contract or other agreement to create and/or disseminate a deepfake is guilty of a crime and shall be fined and sentenced as provided in Section 6 if the person knows that the item is a deepfake or recklessly disregards whether or not the item is a deepfake, and the creation and/or dissemination:

  1. takes place within 90 days before or 180 days after an election; and
  2. is made with the intent to create a false impression of voter fraud, falsely claim or imply that an election has been stolen, falsely claim or imply that an election official has intentionally failed to carry out their election duties, falsely impugn the integrity or security of an election or balloting system, falsely discredit the veracity of electoral results or otherwise convey false information intended to erode confidence or participation in an election.

Section 5.  Exceptions

A person will not be convicted or found guilty of a violation of Section 3 or Section 4 if the deepfake is clearly and conspicuously labeled in writing with the following statement in a font size that is easily readable by a viewer and in no event smaller than the largest point font size used elsewhere in the deepfake or any part of the materials accompanying the deepfake: “Beware.  This message includes a deepfake that does not depict actual speech or conduct.”  If the message is audio only, the following statement shall be read in a clearly spoken and articulated manner and in a pitch and tempo that can be easily heard by the average listener at the beginning of the message and the end of the message: “Beware.  This message includes a deepfake that does not depict actual speech or conduct.”  If the audio is greater than two minutes in length, then the same message shall also be interspersed within the audio at intervals of not greater than two minutes each.

This law shall not be construed to alter or negate any rights, obligations, or immunities of an interactive service provider under Section 230 of Title 47 of the United States Code.

This law does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts a deepfake prohibited by this law as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of a bona fide news event, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are legitimate questions about the authenticity of the deepfake.

This law does not apply to an internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes a deepfake otherwise prohibited by this law, if the publication clearly states that the deepfake does not accurately represent the speech or conduct of the depicted individual in a manner that can be easily heard or read by the average listener or viewer.

This law does not apply to a deepfake otherwise prohibited by this law if it is clear to the average listener or viewer that the deepfake constitutes parody or satire.

Section 6.  Creation or dissemination of a prohibited deepfake; penalty

A person convicted or found guilty of a violation of Section 3 or Section 4 shall be fined up to $[XXXX], sentenced to imprisonment of up to [X] years, or both.  If the offense was undertaken for monetary gain, the person convicted or found guilty shall also forfeit any monetary gains procured from the commission of the crime.

Section 7. Injunctive relief

An action for injunctive or other equitable relief, including a temporary restraining order and a permanent injunction, may be maintained against any person who is reasonably believed to be about to violate, or who is in the course of violating, this law by:

  1. the attorney general;
  2. a county attorney or city attorney;
  3. a depicted individual; 
  4. a candidate or election official who is injured or likely to be injured by the creation and/or dissemination of the deepfake; or
  5. any registered voter eligible to vote in the applicable election.

Section 8. Private cause of action  

A depicted individual in a deepfake prohibited by Section 3 or 4 may bring a civil cause of action against any person who creates or disseminates the offending deepfake.  If the depicted individual prevails in such civil action, the court may award damages in an amount equal to the greater of (a) actual damages or (b) an amount equal to the cost of creating and/or disseminating the deepfake that violated this law and any monetary gains procured from the creation and/or dissemination of the deepfake by the defendant(s), in addition to reasonable attorney’s fees and costs.

In any civil action brought by a depicted individual who is a candidate alleging a violation of this law, the candidate shall bear the burden of establishing the violation through clear and convincing evidence.

For purposes of an action for defamation, including libel and slander, brought by a depicted individual, a violation of this law shall constitute defamation per se.

Section 9. Education and awareness

The appropriate departments may initiate educational and awareness programs to inform the public about the existence of this law and the potential threats of political deepfakes to the democratic process.

Section 10. Severability  

The provisions of this law are severable.  If any provision of this law or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.

Section 11. Sunset review

This law will be reviewed [X] years after its implementation to ensure its relevance and effectiveness as technology and society evolve.

Section 12. Enactment

This law is effective [XXXXXX], 202[X], and applies to crimes committed on and after that date.

Addendum II

Fact Sheet: Model Legislation Prohibiting the Creation and/or Dissemination of a Deepfake to Maliciously Harm a Political Candidate or Election Official, Improperly Influence the Outcome of an Election or Falsely Impugn an Election Outcome and Similar Actions

Summary

The model legislation seeks to combat the proliferation of deceptive ‘deepfakes’ that have the potential to improperly manipulate voting and undermine elections and election outcomes.

Background

Deepfakes – fabricated videos, sounds and images of someone appearing, without their consent, to say or do something they did not – are a dangerous technology with the potential to convincingly mislead or manipulate voters and to sow discord among an already hyper-partisan electorate.

Deepfakes distort the truth, making it difficult to distinguish between legitimate and fake media, and making it more likely that people will accept whichever aligns with their views. Deepfakes make it easier to pass off fake events as real and to dismiss real events as fake – a phenomenon dubbed “the liar’s dividend.”

The artificial intelligence tools used to create deepfakes are widely available online. As they continue to become more sophisticated, it will be increasingly difficult to combat the spread of artificially generated deceptive media.

The Model Legislation

The model legislation would prohibit a person, within 90 days before an election, from distributing a deepfake with knowledge that it is a deepfake or with reckless disregard concerning whether it is a deepfake if the deepfake or its distribution is made with the intent to injure a candidate's reputation or prospects or an election official's reputation or to improperly influence the outcome of an election, unless the media includes a specified disclosure stating that it includes a deepfake. Additionally, the model legislation would prohibit a person, within 90 days before and 180 days after an election, from distributing a deepfake with knowledge that it is a deepfake or with reckless disregard concerning whether it is a deepfake if the deepfake or its distribution is made with the intent to falsely claim an election was stolen or otherwise convey false information intended to erode confidence or participation in an election, unless the media includes a specified disclosure stating that it includes a deepfake.

The model legislation defines "deepfake" as media that (1) "appears to a reasonable person to depict speech or conduct of a person who did not in reality engage in such speech or conduct or otherwise present something that did not, in fact, occur" and (2) "if the depicted individual is a real person, was created or produced without the consent of the depicted individual."

The model legislation would authorize a candidate to seek injunctive or other equitable relief and bring an action for general or special damages against the person or entity that distributed the deepfake in violation of the law.

The model legislation provides specified exemptions for media outlets and providers that distribute an otherwise prohibited deepfake if specified conditions are met to avoid chilling legitimate news reporting. It also contains an exception for deepfakes that constitute parodies or satire.

The model legislation would authorize education and awareness programs about the law and the potential threats of political deepfakes. Once enacted, the legislation would remain in effect until a specified sunset date.

Existing State Deepfake Laws

Laws like the model legislation have already been enacted in California, Minnesota and Washington. Texas has a more limited law prohibiting political deepfakes. Michigan enacted legislation that requires all qualified political advertisements "generated in whole or substantially by artificial intelligence" to contain a disclosure specifying same. Additional states have laws banning pornographic deepfakes.

The model legislation has been drafted to address the most common opposition faced by the laws in those states before they were enacted.

Proposed Federal Legislation

A bipartisan bill titled "The Protect Elections from Deceptive AI Act" was introduced in the U.S. Senate in September 2023. The federal bill would prohibit the distribution of materially deceptive AI-generated media related to candidates for Federal office. As of January 26, 2024, the federal bill is under consideration by the Senate Rules and Administration Committee.

If the federal bill is eventually passed, it will only apply to federal elections. The model legislation also applies to state and local elections. Additionally, the federal bill does not apply to malicious deepfakes of election officials or deepfakes regarding election integrity that falsely impugn the outcome of an election. The model legislation does.

This document has been submitted to the Task Force for American Democracy for consideration and has been posted and/or circulated for information purposes only. The views expressed herein represent the opinions of the author(s) and not those of the Task Force or the ABA. They have not been reviewed or approved by the House of Delegates or the Board of Governors of the American Bar Association and, accordingly, should not be construed as representing the position of the Association or any of its entities. This publication is freely available to download, copy and distribute provided there is attribution to the ABA Task Force for American Democracy, and provided this notice is reproduced on all copies.

N. David Bleisch

About the Author

Mr. N. David Bleisch has served as Executive Vice President, Chief Legal Officer and Corporate Secretary of Office Depot, Inc. since September 20, 2017. He served as Chief Legal Officer and Senior Vice President of The ADT Corporation from September 1, 2012 until May 2, 2016, as its General Counsel until May 2, 2016, and as its Corporate Secretary beginning September 1, 2012; he also served as a Director of The ADT Corporation until September 28, 2012. He joined Tyco in 2005 as Deputy General Counsel of Tyco Fire & Security, served as Vice President and General Counsel of Tyco's ADT North American Residential business segment, and managed the intellectual property legal group for all of Tyco's operating segments worldwide. Earlier, Mr. Bleisch was Senior Vice President, General Counsel and Secretary of The LTV Corporation in Cleveland, Ohio, and served as a Director of The LTV Corporation beginning December 13, 2001. Prior to joining LTV, he was a partner in the law firm of Jackson Walker LLP, where he served as a corporate transactional attorney before transitioning to commercial trial work. Mr. Bleisch has a Bachelor of Arts from Carleton College and a Juris Doctor degree from Boston College Law School.