July 21, 2017 | Practice Points

Artificial Intelligence May Force Legal Community to Reconsider Rules of Evidence

In a few years, voice forgeries may be so good they can fool experts

by John F. Barwell

Artificial intelligence may soon force courts and lawyers to reconsider how voice recordings are authenticated and used as evidence. Nearly three decades after Photoshop made it easy to manipulate imagery, a Montreal company called Lyrebird has developed an artificial intelligence platform that lets users manipulate a person's voice. This developing technology may affect lawyers in a variety of practice areas, and in a variety of ways.

According to its website, Lyrebird "can mimic a person's voice and have it read any text with a given emotion, based on the analysis of just a few dozen seconds of audio recording." Lyrebird is not alone. Late last year, Adobe unveiled Project VoCo, a prototype for a similar software platform that can edit human speech the way Photoshop edits images.

Audio recordings have played an important and persuasive role in legal matters for over 100 years. See Boyne City, G. & A.R. Co. v. Anderson, 146 Mich. 328, 330, 109 N.W. 429, 430 (1906). Such recordings have become so common and familiar that recent cases have developed more liberal standards for their admission. In one case, even unexplained defects in a tape recording did not prevent its admission into evidence. United States v. Traficant, 558 F. Supp. 996, 1002 (N.D. Ohio 1983).

Under the current federal rules, authenticating "an item of evidence" requires the proponent to "produce evidence sufficient to support a finding that the item is what the proponent claims it is." Fed. R. Evid. 901(a). For voice recordings, lay witness opinion testimony "based on hearing the voice at any time under circumstances that connect it with the alleged speaker" is sufficient to establish a recording's authenticity and have it admitted into evidence. Fed. R. Evid. 901(b)(5). But research shows that a listener's ability to identify another person's voice is vulnerable to impersonation. In the end, however, whether a "recording is accurate, authentic and generally trustworthy" is left to the discretion of the trial court. United States v. King, 587 F.2d 956, 961 (9th Cir. 1978).

With so little "legal" scrutiny of voice-based evidence, and with so much room for error, one can easily imagine the potential impact of a perfectly mimicked (but completely fake) audio recording in all sorts of legal disputes. From inculpatory statements in a criminal case to slanderous statements in a defamation case, the risk of admitting false audio recordings into evidence is a problem for which the legal community should prepare. According to Wired, "it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis."

One way to reduce the risk of nefarious use of voice-mimicking software is to encourage (or require) its developers to build into the software a hidden function that automatically embeds forensic markers, such as a digital watermark, into fake recordings. A forensic analyst could then examine a recording and opine on its authenticity based on the presence or absence of such markers.
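To make the idea concrete, the Python sketch below shows one way such a scheme might work, using a simple spread-spectrum marker: a key-derived pseudorandom signal is mixed into the synthetic audio at a low level, and a detector later checks for it by correlation. The key, amplitude, and detection threshold here are illustrative assumptions, not features of any actual product.

```python
import numpy as np

# Illustrative parameters (assumptions, not any vendor's actual design).
KEY = 2017          # shared secret that seeds the watermark sequence
AMPLITUDE = 0.005   # marker level, kept well below the speech signal

def embed_watermark(audio, key=KEY):
    """Mix a pseudorandom, key-derived marker into a mono float waveform."""
    rng = np.random.default_rng(key)
    marker = rng.choice([-1.0, 1.0], size=audio.shape[0])  # +/-1 chip sequence
    return audio + AMPLITUDE * marker

def detect_watermark(audio, key=KEY):
    """Correlate against the key-derived sequence; an unmarked (genuine)
    recording should produce a near-zero score."""
    rng = np.random.default_rng(key)
    marker = rng.choice([-1.0, 1.0], size=audio.shape[0])
    score = np.dot(audio, marker) / audio.shape[0]
    return score > AMPLITUDE / 2  # illustrative threshold

# Usage: the "fake" clip carries the marker; the unmarked clip does not.
rng = np.random.default_rng(0)
genuine = 0.1 * rng.standard_normal(48_000)  # stand-in for 1 second of audio
fake = embed_watermark(genuine)

print(detect_watermark(fake))     # True  -- marker present, flagged synthetic
print(detect_watermark(genuine))  # False -- no marker found
```

A real scheme would need to survive compression, re-recording, and deliberate removal attempts, which is far harder than this sketch suggests; the point is only that a marker's presence or absence gives a forensic analyst something objective to test.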

As artificial intelligence opens a new world of technical achievement, its consequences cannot be ignored. Lawyers will have an important role to play as this technology develops.

John F. Barwell is an associate at Polsinelli in Phoenix.


Copyright © 2017, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).