December 25, 2023 Law and National Security

AI presents challenge for U.S. intelligence agencies

U.S. intelligence agencies that oversee secret operations must walk a fine line when developing a framework for AI governance, said Ashley Deeks, former deputy legal adviser to President Biden’s National Security Council.

Margaret Hu (from left), Michael Groen, Ashley Deeks and Jocelyn Aqua discuss the impact of artificial intelligence on national security.
American Bar Association photo

“So much of what our national security agencies do, they have to do in secret in order to be effective,” Deeks said. Missions are often carried out in a national security “black box,” using undefined or classified methods. The growing use of artificial intelligence tools in the national security sphere creates what Deeks calls a “double black box,” since many artificial intelligence mechanisms are developed using proprietary, or sometimes secret, algorithms.

“What’s happening now is that we are introducing a different black box into our national security black box,” she said. “I think the question that we need to be asking is, how do we ensure that the AI systems in our national security agencies are going to comport with the values that we, as a country, want them to comport with?”

Deeks, an associate professor at the University of Virginia School of Law, spoke at the 33rd Annual Review of the Field of National Security Law conference in Washington, D.C., sponsored by the ABA Standing Committee on Law and National Security. The panel also included Jocelyn Aqua, a former privacy official at the Justice Department and principal for data risk, privacy and AI governance at PricewaterhouseCoopers; Marine Corps Lt. Gen. Michael Groen (retired), who served as commander of the Joint Artificial Intelligence Center; and moderator Margaret Hu, law professor at William and Mary Law School.

Groen said the Department of Defense has rigorous processes for developing AI, citing its 2020 release of Ethical Principles for Artificial Intelligence. “It is really important that the Department of Defense is transparent,” he said. “We will not employ artificial intelligence that leads us away from our moral position (as) ethical warriors,” he said, adding that AI is used by all branches of the U.S. armed forces, on and off the battlefield.

“Are we willing to put young soldiers at risk if we have the technology to make them safer? … These are the kind of things that we continue to try to improve so that the Department of Defense and young service members are protected and able to fight effectively and morally on a modern battlefield.”
