
Litigation News | 2025

Court Finds AI Expert Has “Fallen Victim” to AI

Nathaniel Yu

Summary

  • AI expert’s declaration struck from evidence for citing nonexistent sources.
  • In the underlying case, a social media influencer sought to enjoin enforcement of a statute that criminalizes the use of deepfakes intended to harm political candidates or influence election results.
  • The state opposed and submitted declarations from two AI experts, one of which included inaccuracies; the plaintiffs pointed out the errors and moved to exclude the proposed testimony.

Failure to conduct due diligence when using generative artificial intelligence (AI) may result in stricken testimony, as a federal district court decision illustrates. ABA Litigation Section leaders concur with the court and discuss the implications of this cautionary tale.

Deep Fakes and Fake Citations

In Kohls v. Ellison, the U.S. District Court for the District of Minnesota heard a preliminary injunction motion brought by a social media influencer and a Minnesota State Representative. They sought to enjoin enforcement of Minnesota Statute Section 609.771, which criminalizes the use of deepfakes intended to harm political candidates or influence election results. As active participants in the 2024 presidential election cycle, the plaintiffs published AI-generated content directed at a conservative audience. They believed their actions were prohibited under the statute and claimed that the statute violates their constitutional rights.

The state opposed the motion and submitted two declarations from AI experts. Both declarations provided an overview of AI technology and explained how deepfakes endanger democracy. One of the declarations, however, included inaccurate representations: two cited sources were nonexistent, and one authorship attribution was erroneous. The plaintiffs brought this to the attention of the court and moved to exclude the proposed expert testimony.

The expert, a professor of communication at Stanford University, admitted that he used GPT-4o when drafting the declaration and overlooked the mistakes during his review. Seeking to salvage the declaration, the state requested leave to amend, citing excusable neglect.

The Truth and Nothing But the Truth

The court denied leave to amend and struck the problematic filing. “The irony,” the court began. The professor, “a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less,” it continued.

Particularly troubling to the court was the professor’s failure to apply the same level of scholarly rigor to the declaration that he applies to his academic writing. He did not use reference software, as he typically does in his academic work, to verify the content of the declaration. Compounding the issue, the document was submitted under penalty of perjury. The mere existence of false representations, even if they were innocent mistakes, effectively “shatters [the professor’s] credibility” with the court. Underscoring the seriousness of court filings, the court explained that it expects heightened diligence when documents are submitted under penalty of perjury.

Although the Attorney General’s Office claimed to have no knowledge of the fake citations, the court reminded the attorneys, “Federal Rule of Civil Procedure 11 imposes a ‘personal, nondelegable responsibility’ to ‘validate the truth and legal reasonableness of the papers filed’ in an action.” A reasonable inquiry, the court suggested, may require attorneys to examine the accuracy of the content and to know whether witnesses used AI.

The court noted that this decision adds yet another voice to “a growing chorus of courts around the country” declaring that AI-generated content in legal submissions needs verification.

A Chorus Singing the Same Cautionary Tale

“Courts are trying to send a message that they are not going to tolerate attorneys using AI as a substitute for their own judgment,” says Joseph V. Schaeffer, Pittsburgh, PA, Co-Chair of the Litigation Section’s Pretrial Practice & Discovery Committee. This is “certainly a tough decision to take if you are on the state side and wanted to use that affidavit. But given the issues that the court identified with the citation accuracy, it’s understandable,” he continues.

The court’s finding that the professor’s credibility was destroyed “seems like a reasonable conclusion to me frankly,” says Lorelie S. Masters, Washington, DC, Member of the Section’s Federal Practice Task Force. She insists that “there must be some accountability” when documents are submitted under penalty of perjury. Because of the ruling, she questions whether the professor can “fail to mention this decision” when being cross-examined in future cases.

Masters frequently serves as an expert witness. “I’ve done these reports before,” she asserts. “We need to do our jobs even if we are relying on technology like artificial intelligence,” she emphasizes. Masters drafts her own reports even though it may be more costly. At the end of the day, “I’m putting my name on it,” she says.

In part, the decision “ignores the practical reality” of how affidavits are produced, worries Schaeffer. “It ignores the fact that often attorneys are the ones who are taking the first stab at an affidavit and then are relying on the client to verify that everything is stated accurately,” he contends. In the reverse scenario, when the client creates the first draft, it may not be reasonable to require attorneys to specifically ask affiants about their AI use. “The real question,” he concludes, is whether the representations are true to the affiant’s personal knowledge.
