The need for public participation in AI systems is acute. On October 30, 2023, President Joe Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI), calling for a coordinated, federal government-wide approach toward AI governance that requires “engagement with affected communities.” Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023).
For instance, to prevent unlawful discrimination in federal programs using AI, the Order requires agencies to “consider opportunities to increase coordination, communication, and engagement about AI as appropriate with community-based organizations; civil-rights and civil-liberties organizations; academic institutions; industry; State, local, Tribal, and territorial governments; and other stakeholders.” Id. § 7.2(a).
The Office of Management and Budget (OMB) issued a proposed memorandum to implement the Order providing that “agencies must consult affected groups, including underserved communities, in the design, development, and use of the AI, and use such feedback to inform agency decision-making regarding the AI.” Off. of Mgmt. & Budget, Exec. Off. of the President, Proposed Memorandum for the Heads of Executive Departments and Agencies, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence 19 (Nov. 1, 2023).
The OMB memorandum lists several mechanisms for soliciting ongoing public input, ranging from direct user testing to listening sessions to post-transaction customer feedback. And tellingly, the memorandum itself is open for public comment.
This emphasis on public participation echoes similar statements regarding AI in the White House’s AI Bill of Rights, the NIST AI Risk Management Framework, and Executive Order 14091 on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. It also accords with Executive Order 14094 on Modernizing Regulatory Review, requiring that all federal agency “regulatory actions … be informed by input from interested or affected communities.” Exec. Order No. 14094, 88 Fed. Reg. 21879 (Apr. 6, 2023).
AI poses serious risks to society that can be identified and countered with public participation. AI outcomes can be inaccurate, biased, and discriminatory. The scope and scale of AI mean that algorithmic failures and embedded biases have far-reaching effects, well beyond the discretion of a single government bureaucrat. Further, many AI systems are “black boxes” whose results are not easily explained or understood, even by their designers. Moreover, the developers who build AI systems and the agencies that adopt them often lack the perspective to understand AI’s real-world impacts.
To improve the government’s use and oversight of AI, federal agencies must ensure that public participation is meaningful rather than performative. See Michele Gilman, Beyond Window Dressing: Public Participation for Marginalized Communities in the Datafied Society, 91 Fordham L. Rev. 503 (2022). In its robust form, public participation consists of processes that give the people most likely to be affected by a given system influence over the system’s design and deployment, including decision-making power.
Public participation enhances the quality of decision-making by incorporating a wider range of perspectives. It adds legitimacy to decisions because people trust processes they understand and can influence. It improves accountability by adding layers of scrutiny and dialogue between the public and decision-makers.