Criminal defense attorneys have also invoked the Confrontation Clause to oppose software-derived evidence, again likening the software to a witness whom the accused should be permitted to confront through cross-examination. These challenges are more likely to be successful where the software was created specifically for courtroom use—such as with DNA analysis—and thus is undeniably “testimonial” under the U.S. Supreme Court’s Crawford jurisprudence. Given the practical impossibility of cross-examining a computer program, however, counsel have had to be creative in proposing means to exercise the accused’s confrontation right. Demands have included pretrial disclosure of the software’s underlying source code and testimony of the software’s designer.
Critics have noted that, notwithstanding the supposed “objectivity” of machine-derived evidence, a computer program is only as fair and equitable as it is designed to be, and thus any form of software can be biased depending on its underlying algorithms. Facial recognition technology, for example, misidentifies female and minority subjects at markedly higher rates, likely because the data sets used to train its algorithms are unrepresentative. Although awareness of this issue has prompted several municipalities to ban facial recognition technology for police use, one can expect that where it is still employed, criminal defendants will challenge the admissibility of technology-derived identifications on grounds of both unreliability and potential disparate impact on particular demographic groups.
What Is Next for AI-Derived Evidence?
Without doubt, the diversity and complexity of technological evidence available to juries today dwarf anything in existence at Preston Quick’s trial in the 1950s. An outright ban on technology-derived evidence, therefore, would needlessly deprive fact finders of relevant information—the very concern raised in Quick. In many cases, existing evidentiary rules and trial techniques remain adequate to test the reliability and credibility of machine evidence. As machines continue to take on new analytical capabilities independent of their users and designers, however, some new safeguards may be appropriate.
Some posit that pretrial access to the software’s underlying source code would be sufficient to allow the party against whom it is introduced to scrutinize the logic and reasoning of its algorithms and, presumably, to expose any errors or biases in the advancing party’s evidentiary conclusions. But skeptics contend that such disclosure is unnecessary, noting that software can be rigorously tested simply by directing it to analyze a set of control data whose correct results are already known. The same skeptics also note that source code frequently contains proprietary and commercially sensitive components, and that requiring its disclosure would likely discourage companies from offering their software for use in litigation.
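To make the skeptics’ point about control data concrete, the sketch below illustrates one way a litigant might validate such software as a “black box,” without any access to its source code: run it against control samples whose correct outcomes are already known and measure how often it errs. This is a minimal illustration only; the analyze function, the control samples, and the reported error rate are all assumed for the example and do not describe any actual forensic product.

```python
# Minimal sketch of black-box validation against control data.
# Everything here is hypothetical and for illustration only; no
# access to the vendor's source code is required for this approach.

from typing import Callable, Iterable, Tuple


def validate_against_controls(
    analyze: Callable[[str], str],
    controls: Iterable[Tuple[str, str]],
) -> float:
    """Run the software on control samples with known ground truth
    and return the observed error rate."""
    total = errors = 0
    for sample, expected in controls:
        total += 1
        if analyze(sample) != expected:
            errors += 1
    return errors / total if total else 0.0


if __name__ == "__main__":
    # Fabricated control samples used only to demonstrate the mechanics.
    controls = [("sample_A", "match"), ("sample_B", "no_match")]
    # Stand-in for the vendor's analysis routine (here, a trivial stub).
    error_rate = validate_against_controls(lambda s: "match", controls)
    print(f"Observed error rate on controls: {error_rate:.0%}")
```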
Other scholars have contended that a heightened standard of admissibility—i.e., one that exceeds the prevailing Daubert standard for expert testimony—may be appropriate for certain forms of software-derived evidence. The reasoning is that cross-examination remains the primary means of challenging unreliable scientific or technical evidence, and thus a more rigorous threshold test should apply where a litigant is, as a practical matter, unable to cross-examine the source of the evidence once it is admitted. As a supplemental screening tool, courts could demand, for example, evidence external to the software that corroborates its conclusions, akin to the new standard for the admission of hearsay under the residual exception of Federal Rule of Evidence 807. Until such time as machines can themselves be cross-examined, such a requirement may be courts’ best option to ensure both consistency and transparency in AI-based evidence.