
Administrative & Regulatory Law News

Fall 2023 — Ready or Not, Here Comes AI

Predicting the Future of HHS Regulation of Race-Based Algorithms

Tamra Moore

Summary

  • Providers use diagnostic algorithms to guide clinical decision-making and assess individual health risks in various medical specialty areas, including cardiology, nephrology, obstetrics, and oncology.
  • The U.S. Department of Health and Human Services published its proposal to regulate the use of race-based clinical algorithms in August 2022.
  • Executive Order 14110 directs certain federal agencies to use their statutory and regulatory authorities to issue a range of guidelines, standards, frameworks, and other guidance documentation.

In 2020, the New England Journal of Medicine published an article that highlighted the “potential danger” associated with “diagnostic algorithms and practice guidelines that adjust or ‘correct’ their inputs based on a patient’s race or ethnicity.” Darshali A. Vyas et al., Hidden in Plain Sight—Reconsidering the Use of Race Correction in Clinical Algorithms, 383 N. Engl. J. Med. 874 (2020). Providers use these algorithms to guide clinical decision-making and assess individual health risks in various medical specialty areas, including cardiology, nephrology, obstetrics, and oncology. The problem is that there is “mounting evidence that race is not a reliable proxy for genetic difference” and that the use of race-adjusted algorithms “fail[s] to account for genetic diversity within racial and ethnic groups and … reinforce[s] stereotypes.” Tina Hernandez-Boussard et al., Promoting Equity in Clinical Decision Making: Dismantling Race-Based Medicine, 42 Health Affairs 1369 (2023). Consequently, there is growing concern that algorithms that adjust or correct for race or ethnicity may lead to inaccurate estimates of clinical risk and “treatment patterns that are inappropriate, unjust, and harmful” to certain racial and ethnic patient groups, further exacerbating “persistent racial and ethnic disparities in health and health care in the United States.” Id.

Last year, the United States Department of Health and Human Services (HHS or the Department) issued a proposed rule to tackle this issue head-on using its authority under Section 1557 of the Patient Protection and Affordable Care Act (Section 1557). Section 1557 prohibits “covered entities” from discriminating based on race, color, national origin, sex, age, or disability in health programs or activities. 42 U.S.C. § 18116(a). The statute defines a “covered entity” as any healthcare entity or organization that receives federal financial assistance (e.g., Medicare and Medicaid) and includes hospitals, health clinics, health insurance issuers, state Medicaid agencies, community health centers, physicians’ practices, and home health care agencies that participate in the Medicare or Medicaid programs. The Department has described Section 1557 as “the government’s most powerful tool to ensure access to and coverage of health care in a nondiscriminatory manner.” 87 Fed. Reg. 47824, 47880 (Aug. 4, 2022). In its August 2022 notice of proposed rulemaking, the Department outlined its reasons for using this powerful statutory tool to propose a “new provision” that would address the use of discriminatory clinical algorithms by holding covered entities liable for discrimination under Section 1557. Id.

The Department explained that it was “critical to address this issue explicitly” given “recent research demonstrating the prevalence of clinical algorithms that may result in discrimination.” Id. at 47825. As the Department observed, “[c]linical algorithms are used for screening, risk prediction, diagnosis, prognosis, clinical decision-making, treatment planning, health care operations, and allocation of resources, all of which affect the care that individuals receive,” and many of these tools explicitly use race and ethnicity as input variables in a way that may lead to discrimination based on race and ethnicity. Id. at 47881. One example is a tool that evaluates kidney function by “adjusting the score for Black patients, making their kidneys register as 16 percent healthier than white patients’ kidneys.” Id. The tool fails to account for the fact that “Black Americans are about four times as likely to have kidney failure as white Americans and make up more than 35 percent of people on dialysis while representing only 13 percent of the U.S. population.” Id. As the Department explained, the use of this race-adjusted tool not only reduces the number of Black patients placed on transplant lists, but also reduces the number of patients referred for kidney disease management, to nephrology specialists, and for dialysis management. Id. These potential harms to Black patients in the United States led the National Kidney Foundation and the American Society of Nephrology to create a task force that recommended an approach that does not use race.  See id.
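The mechanism the Department describes can be made concrete with a minimal, purely illustrative sketch. The code below is not the Department’s example or any vendor’s actual clinical decision support tool; it simply applies a race multiplier of roughly 1.16 (the widely cited coefficient in the older race-adjusted eGFR equation, consistent with the “16 percent” figure above) to a hypothetical baseline score, and the lab values and referral threshold are assumptions chosen only to show how the same result can be reported differently depending on recorded race.

```python
# Illustrative sketch only: how a race "correction" factor changes a reported
# kidney-function estimate (eGFR). RACE_MULTIPLIER and the patient values are
# assumptions for demonstration, not drawn from any covered entity's tool.

RACE_MULTIPLIER = 1.159  # ~16% upward adjustment applied to Black patients
                         # in the older race-adjusted eGFR equation

def reported_egfr(base_egfr: float, recorded_as_black: bool) -> float:
    """Return the eGFR a race-adjusted tool would report.

    base_egfr: the estimate derived from creatinine, age, and sex alone.
    """
    if recorded_as_black:
        return base_egfr * RACE_MULTIPLIER
    return base_egfr

# Two patients with identical lab results and a baseline eGFR of 19
# mL/min/1.73 m^2. Transplant waitlisting is commonly tied to an eGFR
# at or below 20.
base = 19.0
print(reported_egfr(base, recorded_as_black=False))  # 19.0 -> at/below threshold
print(reported_egfr(base, recorded_as_black=True))   # ~22.0 -> above threshold
```

Under these assumptions, identical laboratory results leave one patient eligible for transplant referral while pushing the other above the cutoff, which is the disparity the Department’s example describes.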

The Department also noted the potential discriminatory impact of “clinical algorithms used under state Crisis Standards of Care plans” during the COVID-19 pandemic, which may have “screen[ed] out individuals with disabilities.” Id. at 47880-81. State Crisis Standards of Care are formal guidelines or policies “adopted during an emergency or crisis that substantially impact usual health care operations and the level of care it is possible” to deliver under the circumstances. Id. at 47881. In emergency or crisis conditions such as the COVID-19 pandemic, states use these crisis standards, which may include algorithms, flow charts, and other assessment tools, to prioritize patients for scarce resources in a way they could not do under non-emergency conditions. The Department explained that during the pandemic, it received complaints and requests for technical assistance from individuals with disabilities and older adults concerning several states’ use of their respective Crisis Standards of Care plans. Id.

Relying on these and other examples, the Department proposed to explicitly prohibit covered entities from using clinical algorithms to inform healthcare decision-making in a discriminatory way in violation of Section 1557. The Department acknowledged that although “individual providers are not likely to have designed the clinical algorithms that augment their clinical decision-making,” under its proposed rule, covered entities are responsible for ensuring “that any action they take based on a clinical algorithm does not result in” unlawful discrimination. Id. at 47883. At the same time, the Department “recognize[d]” that the use of race-adjusted clinical algorithms is “a complex and evolving area that may be challenging for covered entities to evaluate for potential violations of Section 1557,” and that the agency “shares a responsibility in working with” federal financial assistance recipients and others “to identify and prevent discrimination based upon the use of clinical decision tools and technological innovation in health care.” Id. Nevertheless, the Department warned that, if the rule is promulgated as currently proposed, a covered entity could violate Section 1557 by failing to ensure that its use of a clinical algorithm does not result in unlawful discrimination.

As noted above, the Department published its proposal to regulate the use of race-based clinical algorithms in August 2022, opening it for public comment. The public comment period ended in October 2022, and HHS has not yet published a final regulation. One year later, however, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO), which provides a road map of legislative, regulatory, legal, and other actions aimed at addressing the benefits and risks associated with the development and use of AI by the federal government and the private sector. Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023). The EO’s scope is broad, covering various regulated industries (e.g., healthcare, financial services, transportation, technology) and issues (e.g., critical infrastructure, education, antitrust, housing, communications, consumer protection, privacy, and discrimination). It directs certain federal agencies to use their statutory and regulatory authorities to issue a range of guidelines, standards, frameworks, and other guidance documentation to ensure the development of “safe, secure, and trustworthy AI systems.” Id. at 75196. This includes several provisions directing the Department to take certain regulatory and other actions to address discrimination caused by AI.

For example, the EO directs HHS to establish an “AI Task Force” in consultation with the Secretary of Defense and the Secretary of Veterans Affairs to “develop a strategic plan … on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.” Id. at 75214. The AI Task Force will “identify appropriate guidance and resources to promote that deployment in” several areas, including incorporating “equity principles in AI-enabled technologies used in the health and human services sector,” which includes “monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems.” Id.

The EO also instructs HHS, in consultation with other “relevant agencies,” to consider taking certain regulatory actions “to advance the prompt understanding of, and compliance with, Federal nondiscrimination laws by healthcare providers that receive Federal financial assistance, as well as how those laws relate to AI.” Id. at 75215. Those actions may include: (1) “convening and providing technical assistance to health and human services providers and payers about their obligations under Federal anti-discrimination and privacy laws as they relate to AI and the potential consequences of noncompliance,” id.; and (2) “issuing guidance, or taking other action as appropriate in response to discrimination complaints or reports of violations of Federal anti-discrimination law.” Id.

Together, these directives—that HHS establish an AI Task Force, “identify appropriate guidance,” “convene or provide technical assistance” to providers that use clinical algorithms, and “issue guidance or take other action” in response to discrimination complaints—may indicate that the Department will take a regulatory approach different from the one it proposed last year. The fact that the EO contains no reference to the Department’s pending Section 1557 rulemaking proceeding, which addresses the same issues, provides additional support for this theory.

As the Department recognized in its proposed rulemaking, this is “a complex and evolving area,” requiring consideration of several factors. For example, the proposed rule does not address the authority of the Food and Drug Administration (FDA) to regulate certain types of healthcare algorithms, or whether and to what extent the proposed rule’s effort to address the discriminatory use of clinical algorithms potentially undermines the FDA’s medical device review authority. In addition, as the Department acknowledged, many providers lack the technical knowledge to discern whether a clinical decision support (CDS) tool’s use of race (or some other protected-class input variable) results in unlawful discrimination. This sentiment was echoed in a recent Washington Post article about the use of AI in healthcare settings: “I don’t think that we even really have a great understanding of how to measure an algorithm’s performance across different race and ethnic groups,” one respondent told researchers in a study of caregivers at hospitals including the Mayo Clinic, Kaiser Permanente, and the University of California at San Francisco. Pranshu Verma, Hospital Bosses Love AI. Doctors and Nurses Are Worried., Wash. Post (Aug. 10, 2023).
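The measurement difficulty that respondent describes can be illustrated with a minimal sketch: given a tool’s predictions and actual outcomes labeled by group, one can at least tabulate an error rate per group and compare them. The records, group labels, and choice of sensitivity as the metric below are assumptions made for illustration only; nothing in the proposed rule or the EO prescribes this (or any) method for demonstrating compliance.

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_high_risk).
records = [
    ("Group A", True, True), ("Group A", False, True), ("Group A", False, False),
    ("Group B", True, True), ("Group B", True, False), ("Group B", False, False),
]

# Tally true positives and false negatives per group to compare sensitivity,
# i.e., how often the tool flags patients who truly need intervention.
tp = defaultdict(int)
fn = defaultdict(int)
for group, predicted, actual in records:
    if actual:
        if predicted:
            tp[group] += 1
        else:
            fn[group] += 1

for group in sorted(set(g for g, _, _ in records)):
    denom = tp[group] + fn[group]
    sensitivity = tp[group] / denom if denom else float("nan")
    print(f"{group}: sensitivity = {sensitivity:.2f}")
```

Even this simple comparison raises the questions the quoted caregivers flag: which metric matters, how small subgroups should be handled, and how large a gap must be before it amounts to discrimination, none of which the proposed rule resolves for providers.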

The EO’s regulatory approach mitigates some of these concerns by directing the Department to adopt a more deliberate and considered regulatory strategy, requiring the agency to collaborate with relevant agencies, convene and provide technical assistance to providers, and publish guidance regarding providers’ obligation to ensure that their use of clinical algorithms does not result in unlawful discrimination. Only time will tell how the Department will address this complicated issue.
