As noted above, the Department published its proposal to regulate the use of race-based clinical algorithms in August 2022 and opened it for public comment. The comment period closed in October 2022, and HHS has not yet published a final rule. One year later, however, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO), which provides a road map of legislative, regulatory, legal, and other actions aimed at addressing the benefits and risks associated with the development and use of AI by the federal government and the private sector. Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023). The EO’s scope is broad, covering various regulated industries (e.g., healthcare, financial services, transportation, technology) and issues (e.g., critical infrastructure, education, antitrust, housing, communications, consumer protection, privacy, and discrimination). It directs certain federal agencies to use their statutory and regulatory authorities to issue a range of guidelines, standards, frameworks, and other guidance documents to ensure the development of “safe, secure, and trustworthy AI systems.” Id. at 75196. Among these are several provisions directing the Department to take regulatory and other actions to address discrimination caused by AI.
For example, the EO directs HHS to establish an “AI Task Force,” in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to “develop a strategic plan … on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.” Id. at 75214. The AI Task Force will “identify appropriate guidance and resources to promote that deployment in” several areas, including the incorporation of “equity principles in AI-enabled technologies used in the health and human services sector,” which includes “monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems.” Id.
The EO also instructs HHS, in consultation with other “relevant agencies,” to consider taking certain regulatory actions “to advance the prompt understanding of, and compliance with, Federal nondiscrimination laws by healthcare providers that receive Federal financial assistance, as well as how those laws relate to AI.” Id. at 75215. Those actions may include: (1) “convening and providing technical assistance to health and human services providers and payers about their obligations under Federal anti-discrimination and privacy laws as they relate to AI and the potential consequences of noncompliance,” id.; and (2) “issuing guidance, or taking other action as appropriate in response to discrimination complaints or reports of violations of Federal anti-discrimination law.” Id.
Together, these directives—that HHS establish an AI Task Force, “identify appropriate guidance,” “conven[e] and provid[e] technical assistance” to providers that use clinical algorithms, and “issu[e] guidance” or take other action in response to discrimination complaints—may indicate that the Department will take a regulatory approach different from the one it proposed last year. The fact that the EO makes no reference to the Department’s pending Section 1557 rulemaking, which addresses the same issues, lends additional support to this theory.
As the Department recognized in its proposed rulemaking, this is “a complex and evolving area,” requiring consideration of several factors. For example, the proposed rule does not address the authority of the Food and Drug Administration (FDA) to regulate certain types of healthcare algorithms, or whether and to what extent the proposed rule’s effort to address the discriminatory use of clinical algorithms undermines the FDA’s medical device review authority. In addition, as the Department acknowledged, many providers lack the technical knowledge to discern whether a CDS tool’s use of race (or another protected-class input variable) results in unlawful discrimination. That concern was echoed in a recent Washington Post article about the use of AI in healthcare settings: “I don’t think that we even really have a great understanding of how to measure an algorithm’s performance across different race and ethnic groups,” one respondent told researchers conducting a study of caregivers at hospitals including the Mayo Clinic, Kaiser Permanente, and the University of California at San Francisco. Pranshu Verma, Hospital Bosses Love AI. Doctors and Nurses Are Worried., Wash. Post (Aug. 10, 2023).
The EO’s approach mitigates some of these concerns by directing the Department to adopt a more deliberate and considered regulatory strategy: collaborating with other relevant agencies, convening and providing technical assistance to providers, and publishing guidance on providers’ obligation to ensure that their use of clinical algorithms does not result in unlawful discrimination. Only time will tell how the Department will address this complicated issue.
“[The EO’s] directives … may indicate that the Department will take a regulatory approach different from the one it proposed last year.”