Multilayered Issue Spotting in an Evolving AI Policy Landscape
While there is no current, universal law governing AI, there is a patchwork of privacy, consumer protection, health care, and health information technology laws and regulations that pre-date AI and apply to its use. In the absence of a broad-spectrum AI regulation, policymakers have a range of efforts underway to address the gaps, including the White House Blueprint for an AI Bill of Rights, state laws, and voluntary commitments from leading AI companies to develop safe, secure, and trustworthy AI. On October 30, 2023, President Biden issued a landmark executive order with an array of directives to establish standards for AI safety and security, including in the contexts of cybersecurity and health care, equity and civil rights, and privacy and the protection of consumers against fraud and deception. Several directives also task the Department of Health and Human Services (HHS) with creating safety and assurance programs and oversight for health AI. On December 14, 2023, the White House announced that 28 health care provider and payer organizations had made voluntary commitments to help move toward safe, secure, and trustworthy purchasing and use of AI technology.
Assurance Laboratories and AI Oversight
In response to the executive order and ongoing discussion regarding regulation and oversight of AI in health care, AI assurance labs have surfaced as a prominent concept intended to standardize best practices for the development of trustworthy health AI, such as those developed by the Coalition for Health AI (CHAI) and those from efforts like the National Academy of Medicine AI Code of Conduct. Assurance labs would be tasked with testing and evaluating AI tools in health care, enabling transparent reporting on models, promoting regulatory guidance, and monitoring the ongoing performance of AI models to ensure intended objectives are achieved. Micky Tripathi, Assistant Secretary for Technology Policy, National Coordinator for Health Information Technology, and Acting Chief Artificial Intelligence Officer at HHS, recently explained that the assurance labs are part of HHS’s strategic plan regarding the use of AI, which includes a focus on gaps in ensuring responsible AI. While it remains to be seen whether the assurance labs will be public, private, or a hybrid of the two, their development is something to monitor as part of overall AI governance.
Given the rapidly changing environment, it is essential to evaluate AI use cases using a multilayered case-by-case approach grounded in existing law and policy frameworks to help navigate today’s risks and prepare for future AI regulations. Below are a few areas to consider when evaluating health AI.
Privacy, Security, and Technology Risks
AI technologies depend on large volumes of data. Any use of health data throughout the development and deployment life cycle of AI technologies triggers privacy and security concerns. Key points in the life cycle include collecting data for development, training or fine-tuning a model, testing the AI technology, and deploying it. In this section, we highlight notable regulatory developments that reflect underlying concepts regarding the privacy, security, and technology risks of AI technologies; this is by no means an exhaustive list of recent regulatory activity.
Existing privacy requirements stem from statutes like the Health Insurance Portability and Accountability Act (HIPAA) and the Federal Trade Commission Act, state consumer privacy laws, international laws such as the General Data Protection Regulation (GDPR), and new and developing regulatory schemes, as well as from contractual terms that may restrict how data can be used and disclosed and how data must be protected.
FTC Activity
The Federal Trade Commission (FTC) has been active in applying its existing authority over unfair and deceptive acts or practices under Section 5 of the FTC Act to the risks AI technologies can pose. In particular, the FTC has focused on how critical informed, transparently captured consent is to the ethical use of personal data in AI technologies. For example, in a February 13, 2024, blog post regarding the use of collected data for AI purposes, the FTC highlighted that businesses cannot surreptitiously change the rules of engagement without informing individuals, stating, “A business that collects user data based on one set of privacy commitments cannot then unilaterally renege on those commitments after collecting users’ data.” In other words, prior consent does not equate to future consent for new use cases, including in AI technologies. The FTC has taken action where consent was not appropriately obtained. For example, in a 2021 settlement with Everalbum, Inc., the FTC alleged that the company used photos to develop facial recognition technology without users’ consent or knowledge. As part of the settlement, the FTC required algorithm destruction, raising the stakes of compliance by implicating Everalbum’s valuable intellectual property.
The EU AI Act
In addition to these already-existing requirements, new laws have come into effect that directly speak to privacy, security, and technology risks with AI technologies. The most prominent of these is the European Artificial Intelligence Act (EU AI Act), which went into effect on August 1, 2024. Using a risk-based approach, the EU AI Act creates four levels of risk. The first category, unacceptable risk, is reserved for those technologies considered to be a clear risk to the safety, livelihood, and rights of humans. These technologies are prohibited. The second category, high risk, focuses on certain categories of AI technologies. Of particular relevance to the health care and life sciences space, this category includes uses of AI technologies that are a safety component of a product or are a product—like a medical device—already required to undergo a third-party conformity assessment. High-risk AI technologies must meet several strict requirements before going to market, including undergoing a conformity assessment, registration of stand-alone AI systems in a database, and a declaration of conformity. The last two categories, limited risk and minimal/no risk, have less onerous burdens. With use of limited-risk AI technologies, such as a website chatbot, the EU AI Act requires transparency to end users, while the category of minimal/no-risk AI technologies has no requirements. The European Commission also is working with industry and civic stakeholders to create a Code of Practice to govern general-purpose AI models under the EU AI Act. As this Act has only recently gone into effect and will not be fully enforced until 2026, and additional resources like the Code of Practice are under development, there will be many lessons learned over the next few years as the AI technology industry and downstream users adjust to the requirements of the EU AI Act.
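For organizations triaging an AI portfolio against this tiered structure, the risk levels and their associated obligations can be captured in a simple internal checklist. The Python sketch below is illustrative only: the tier assignments, obligation summaries, and the triage helper are hypothetical simplifications and are no substitute for a legal conformity analysis under the EU AI Act.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Simplified internal labels mirroring the EU AI Act's four risk levels."""
    UNACCEPTABLE = auto()  # prohibited practices
    HIGH = auto()          # e.g., AI that is (or is a safety component of) a regulated medical device
    LIMITED = auto()       # transparency obligations (e.g., website chatbots)
    MINIMAL = auto()       # no additional obligations

# Hypothetical, high-level summary of the obligations described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Do not deploy: prohibited."],
    RiskTier.HIGH: [
        "Conformity assessment before going to market",
        "Registration of stand-alone AI systems in the EU database",
        "Declaration of conformity",
    ],
    RiskTier.LIMITED: ["Disclose to end users that they are interacting with AI."],
    RiskTier.MINIMAL: ["No EU AI Act-specific requirements."],
}

def triage(system_name: str, tier: RiskTier) -> None:
    """Print the checklist a hypothetical governance team might attach to a system."""
    print(f"{system_name} -> {tier.name}")
    for item in OBLIGATIONS[tier]:
        print(f"  - {item}")

if __name__ == "__main__":
    triage("Diagnostic imaging model (medical device component)", RiskTier.HIGH)
    triage("Patient-facing scheduling chatbot", RiskTier.LIMITED)
```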
Identifiable Personal Data and AI Model Training
As mentioned at the outset of this section, one critical inflection point in the AI technology life cycle is when data are used to train AI models. While under HIPAA the training of AI models using protected health information (PHI) may, depending on the use case, fall under health care operations, it nonetheless behooves AI technology users to train and fine-tune their AI technologies using deidentified personal data. As pointed out by Bennett and Matta, HIPAA restrictions on business associates, like data ownership and data aggregation restrictions, and HIPAA individual privacy rights, like the right to amendment, can be difficult to comply with if PHI has been used to train an AI technology. Further, if an individual exercises a right to delete and their personal data have been used to train or fine-tune an AI model, the question arises whether a model can truly forget information it has been trained on. Once a model is trained on data, particularly identifiable personal data, in order for the model to “forget” personal data (or other undesirable inputs), it must “unlearn” the data, or, in other words, be retrained. Retraining can be a time- and resource-intensive effort. Beyond these practical considerations, there are also reputational considerations. California’s recently signed AB 2013 will require AI developers (those who create or fine-tune models) to publish a description of the datasets used to develop the AI technology. This transparency requirement may bring additional scrutiny to entities’ AI training practices.
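To make the retraining burden concrete, the sketch below shows one simplified way a deletion request might be honored: the affected records are removed and the model is refit from scratch on what remains. This is a minimal, hypothetical example on synthetic data using scikit-learn; production retraining or unlearning pipelines are considerably more involved, which is one more reason deidentification before training is the simpler path.

```python
# Minimal sketch: honoring deletion requests by retraining without the deleted records.
# Hypothetical example on synthetic data; real retraining/unlearning pipelines are far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "training set" keyed by a per-individual record ID (no real personal data).
record_ids = np.arange(1000)
X = rng.normal(size=(1000, 5))                         # stand-in features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # stand-in labels

model = LogisticRegression().fit(X, y)
print(f"Original model trained on {len(record_ids)} records.")

# A right-to-delete request arrives for certain individuals' records.
deletion_requests = {17, 42, 311}
keep = np.array([rid not in deletion_requests for rid in record_ids])

# "Unlearning" here means dropping those rows and retraining from scratch,
# the time- and resource-intensive step discussed above.
model_retrained = LogisticRegression().fit(X[keep], y[keep])
print(f"Retrained on {int(keep.sum())} records after honoring deletions.")
```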
Integrating Privacy, Security, and Technology Governance into Operations
As applicable laws continue to be passed and the regulatory landscape evolves, how should health care legal professionals stay ahead of requirements while also helping clients continue forward progress in AI technology development? Fortunately, tools like the U.S. National Institute of Standards and Technology’s AI Risk Management Framework exist to help organizations better manage security, technology, and other risks associated with AI. The International Organization for Standardization (ISO) also has several standards that speak to AI technologies. In addition, already-existing foundational privacy principles and practices can help guide these efforts. Including data sources for training, testing, and end-user use of AI technologies in your data map will help identify and track key risks and considerations. This includes identifying the source of the data used, the permissions attached to that sourced data (e.g., was consent for the use captured appropriately; if an entity is a business associate, do its Business Associate Agreements grant appropriate rights to use this information), and applicable state, federal, and international laws.
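One lightweight way to put the data-map suggestion into practice is to record, for each data source feeding an AI technology, its provenance, the life cycle stage in which it is used, the permissions attached to it, and the laws that apply. The structure below is a hypothetical sketch with illustrative field names, not a prescribed or standard format.

```python
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    """Hypothetical data-map record for one data source used at one AI life cycle stage."""
    source_name: str       # where the data came from (EHR extract, vendor feed, etc.)
    lifecycle_stage: str   # "training", "fine-tuning", "testing", or "end-user use"
    consent_basis: str     # how permission was captured (consent, BAA, contract term)
    baa_permits_use: bool  # if acting as a business associate, does the BAA allow this use?
    deidentified: bool     # was the data deidentified before this use?
    applicable_laws: list = field(default_factory=list)  # e.g., HIPAA, GDPR, state privacy laws

entry = DataMapEntry(
    source_name="Hospital A discharge summaries",
    lifecycle_stage="fine-tuning",
    consent_basis="Business Associate Agreement",
    baa_permits_use=True,
    deidentified=True,
    applicable_laws=["HIPAA", "State consumer privacy law"],
)
print(entry)
```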
FDA and HHS Regulation of AI Medical Technologies
The Food and Drug Administration (FDA) creates requirements for AI technologies used in medical devices, including software as a medical device. For example, the FDA regulates AI-powered imaging tools that aid in diagnosis and software that uses patient-specific data to generate risk scores. A recent final rule (HTI-1) from the HHS Office of the National Coordinator for Health Information Technology (ONC) established new requirements for AI technologies that might be used in clinical, administrative, and operational contexts, and are supplied by developers of certified health information technology (e.g., EHR developers). Although ONC and the FDA take different approaches to regulating AI, both are generally focused on transparency and risk management.
ASTP/ONC Predictive Decision Support Intervention Framework
The HTI-1 final rule, issued by the ONC, went into effect on March 11, 2024, and establishes a new regulatory framework for certain AI and machine learning (ML) technologies that support decision-making in health care. Compliance is required beginning December 31, 2024.
The new framework, crafted as a certification criterion for technologies that qualify as “Decision Support Interventions” or DSIs, establishes a definition for Predictive DSIs and requires certified health IT developers that certify to the criterion and supply Predictive DSI as part of their health IT modules to (1) support and maintain source attributes (which are categories of technical performance and quality information about DSIs), (2) implement intervention risk management practices, and (3) make certain summary information about their risk management practices publicly available for each Predictive DSI.
The scope of health AI technologies impacted by the Predictive DSI framework maps to the definition of Predictive DSI, which means technology that supports decision-making based on algorithms or models that derive relationships from training data and then produces an output that results in prediction, classification, recommendation, evaluation, or analysis. ASTP declined to limit the definition of Predictive DSIs based on risk, context of use, or the specific source or developer of the intervention.
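For certified health IT developers taking stock of their Predictive DSIs against these three obligations, a simple per-intervention record can flag what documentation is still missing. The sketch below is a hypothetical tracking aid; the attribute names shown are illustrative placeholders and do not reproduce the certification criterion's actual list of source attributes.

```python
from dataclasses import dataclass, field

@dataclass
class PredictiveDSIRecord:
    """Hypothetical tracking record for one Predictive DSI supplied with a certified health IT module."""
    name: str
    source_attributes: dict = field(default_factory=dict)          # (1) illustrative placeholders only
    risk_management_practices: list = field(default_factory=list)  # (2) practices applied to this DSI
    public_summary_url: str = ""                                   # (3) where summary info is published

    def gaps(self) -> list:
        """Return which of the three HTI-1 obligations still lack documentation."""
        missing = []
        if not self.source_attributes:
            missing.append("source attributes")
        if not self.risk_management_practices:
            missing.append("intervention risk management practices")
        if not self.public_summary_url:
            missing.append("publicly available risk management summary")
        return missing

dsi = PredictiveDSIRecord(
    name="Sepsis risk prediction",
    source_attributes={"intended use": "early warning", "training data": "described"},
    risk_management_practices=["validity analysis", "bias evaluation"],
)
print("Outstanding items:", dsi.gaps())
```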
In describing its policy basis for its rule, ONC identified concerns about growing evidence that predictive models introduce or increase the potential for a variety of risks that create unintended or adverse impacts on patients and communities. These risks can impact health care decisions in myriad ways, including through predictive models that exhibit harmful bias, are broadly inaccurate, have degraded due to model or data drift, are incorrectly or inappropriately used, or widen health disparities. The HTI-1 final rule is a major development in the regulation of AI in health care and has implications for organizations throughout the health and information technology industries.
FDA and AI-Driven Medical Products
FDA is tasked with ensuring the safety and effectiveness of many AI-driven medical products. The agency largely regulates software based on its intended use and the level of risk to patients if it is inaccurate. If the software is intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, FDA considers it a medical device. Most AI/ML-based products that qualify as medical devices are categorized as Software as a Medical Device (SaMD). Examples of SaMD include software that helps detect and diagnose a stroke by analyzing MRI images, or computer-aided detection (CAD) software that processes images to aid in detecting breast cancer. Some consumer-facing products—such as certain applications that run on a smartphone—also may be classified as SaMD. By contrast, FDA refers to a computer program that is integral to the hardware of a medical device—such as one that controls an X-ray panel—as Software in a Medical Device. These products also can incorporate AI technologies. Clinical decision support (CDS) software is a broad term that FDA defines as technologies that provide health care providers and patients with “knowledge and person-specific information, intelligently filtered or presented at appropriate times to enhance health and health care.” CDS software may overlap with Predictive DSI functionality regulated by the ASTP.
Following a series of AI-related publications in recent years, on March 15, 2024, the FDA’s Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and Office of Combination Products (OCP) (the Centers) jointly published a paper—Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP Are Working Together—detailing the Centers’ four high-level priorities for a patient-centered, risk-based regulatory approach that strikes a balance between fostering responsible and ethical innovation and upholding quality, safety, and effectiveness.
State AI Laws
In the absence of a comprehensive federal law governing the development and/or deployment of AI, states have begun to pass their own laws that organizations must take into account when evaluating the legal implications of AI. Notably, 2024 ushered in two different state laws with comprehensive requirements targeted at the development and/or deployment of AI. Utah enacted the Artificial Intelligence Policy Act, which establishes disclosure requirements when a consumer interacts with generative AI, and not a human, in a regulated occupation, including licensed health care providers.
Colorado passed the Colorado Artificial Intelligence Act, which requires developers and deployers of “high-risk artificial intelligence systems” to use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination in the high-risk system. High-risk artificial intelligence systems are those that, when deployed, make, or are a substantial factor in making, a “consequential decision,” meaning a decision with a material legal or similarly significant effect on a consumer in areas including health care. The Act establishes a rebuttable presumption of reasonable care if the developer or deployer complies with provisions in the Act aimed at transparency and risk management practices, as applicable.
While it is likely that states will continue to enact specific laws directly regulating AI, many states have existing consumer privacy and protection laws that regulate certain AI development. Since the initial passage of the California Consumer Privacy Act of 2018 (CCPA), nineteen states have passed comprehensive privacy laws governing the protection of and rights to consumers’ personal information. While there are differences among the various state laws, some include the right for consumers to opt out of the use of personal information for purposes of “profiling” in furtherance of decisions that produce legal or similarly significant effects concerning a consumer. Certain states define profiling as any means of automated processing of personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. Another state law provides a similar consumer opt-out right for the processing of personal data for the purpose of profiling in furtherance of “solely” automated decisions. Given the rights consumers have regarding the automated use of their personal information, organizations developing AI with personal information must consider the operational implications of honoring such consumer requests, as well as additional requirements states impose on controllers of personal information, including conducting data protection assessments on the processing of such information.
On March 13, 2024, Utah’s Artificial Intelligence Policy Act was signed into law, amending Utah’s consumer protection and privacy laws to require disclosure to consumers, in certain circumstances, of the use of AI, effective May 1, 2024. Interestingly, the Act takes a bifurcated approach to the disclosure requirement: it holds businesses and individuals at large to one standard, and it holds regulated occupations, including health care professionals, to another. The Act does not require individuals’ consent or directly regulate how generative AI is used once its use is disclosed to patients.
National Association of Insurance Commissioners
In December 2023, the National Association of Insurance Commissioners (NAIC) adopted a Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The model bulletin reminds insurance carriers that they must comply with all applicable insurance laws and regulations (e.g., prohibitions against unfair trade practices) when making decisions that impact consumers, including when those decisions are made or supported by advanced technologies, such as AI systems.
Intellectual Property Considerations
Before using data to train AI, organizations must evaluate whether using the data would violate contractual rights or infringe intellectual property (IP) rights. Answers to such questions are not always straightforward when multiple organizations contribute resources at different stages of the AI life cycle. Collaboration between data contributors/sources and AI system developers adds complexity to IP ownership and licensing issues, and existing legal frameworks are grappling with how to resolve IP questions raised by AI technologies.
Bias, Fairness, and Reliability
A major concern surrounding the use of AI in health care is the potential for bias. AI technologies are built on data that often reflect the inequities and biases that have long plagued U.S. health care. The risk of bias must be managed when training an algorithm, when determining whether a given use case is too high risk, and by monitoring AI’s performance once deployed. While AI has made, and continues to make, rapid progress, machine intelligence remains narrower than human intelligence and empathy while appearing to demonstrate human reasoning ability. This poses risks in health care contexts, as AI technologies may generate outputs that seem trustworthy but are biased, unfair, or inaccurate hallucinations.
The risk of bias also has been a focus outside the United States. For example, under the EU AI Act discussed above, high-risk AI technologies, as part of their required conformity assessments, will be required to demonstrate that they are technically robust and appropriately trained and tested to minimize and account for the risk of bias. Singapore’s AI Verify self-assessment program likewise includes “Fairness/No Unintended Discrimination” among its principles.
On April 26, 2024, HHS issued a final rule reinterpreting Section 1557 of the Affordable Care Act (ACA), which prohibits discrimination on the basis of race, color, national origin, sex, age, or disability, or any combination thereof, in a health program or activity, any part of which receives federal financial assistance. In the final rule, the HHS Office for Civil Rights (OCR) establishes a general prohibition on covered entities’ discrimination on the basis of race, color, national origin, sex, age, or disability in health care programs or activities through the use of patient care decision support tools. OCR defines “patient care decision support tool” to mean any automated or nonautomated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities. The final rule also creates an ongoing duty for covered entities to make reasonable efforts to identify uses of patient care decision support tools that employ input variables or factors that measure race, color, national origin, sex, age, or disability. For each qualifying patient care decision support tool so identified, covered entities must make reasonable efforts to mitigate the risk of discrimination resulting from the tool’s use.
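As a concrete illustration of the identification duty, a covered entity might screen the developer-documented input variables of each patient care decision support tool against the Section 1557 protected characteristics and queue any matches for mitigation review. The sketch below is a hypothetical, simplified screen over illustrative tool metadata; it is not a substitute for the contextual, reasonable-efforts analysis the final rule describes.

```python
# Hypothetical screen: flag tools whose documented input variables measure a protected
# characteristic under Section 1557, so they can be queued for mitigation review.
PROTECTED_CHARACTERISTICS = {"race", "color", "national origin", "sex", "age", "disability"}

# Illustrative inventory of tools and their developer-documented input variables.
tool_inventory = {
    "kidney function estimator": ["creatinine", "age", "sex", "race"],
    "appointment no-show predictor": ["distance to clinic", "prior no-shows"],
}

for tool, variables in tool_inventory.items():
    flagged = PROTECTED_CHARACTERISTICS.intersection(v.lower() for v in variables)
    if flagged:
        print(f"Review '{tool}': input variables measure {sorted(flagged)}")
    else:
        print(f"No protected-characteristic input variables documented for '{tool}'")
```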
Recognizing the need to give covered entities a reasonable time period to come into compliance with these new AI governance and risk management requirements, OCR is finalizing its requirements for patient care decision support tools with a delayed applicability date of May 1, 2025.
Following the applicability date for the patient care decision support tool requirements, OCR will assess each allegation that a covered entity is violating such requirements on a case-by-case basis. In the final rule, OCR recognizes the challenges that covered entities may face when attempting to identify the discriminatory potential of every use of each patient care decision support tool. When analyzing whether a covered entity is in compliance with the requirement to use reasonable efforts to identify in-scope uses of patient care decision support tools, OCR states it may consider, among other factors,
- the covered entity’s size and resources (e.g., a large hospital with an IT department and a health equity officer would likely be expected to make greater efforts to identify tools than a smaller provider without such resources);
- whether the covered entity used the tool in the manner or under the conditions intended by the developer and approved by regulators, if applicable, or whether the covered entity has adapted or customized the tool;
- whether the covered entity received product information from the developer of the tool regarding the potential for discrimination or identified that the tool’s input variables include race, color, national origin, sex, age, or disability; and
- whether the covered entity has a methodology or process in place for evaluating the patient care decision support tools it adopts or uses, which may include seeking information from the developer, reviewing relevant medical journals and literature, obtaining information from membership in relevant medical associations, or analyzing comments or complaints received about patient care decision support tools.
The scope of patient care decision support tools as defined by OCR overlaps with the definitions for predictive decision support interventions and evidence-based decision support interventions in ONC’s recently published “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” rule. Specifically, the final rule’s definition of “patient care decision support tool” includes nonautomated and evidence-based tools that rely on rules, assumptions, constraints, or thresholds (evidence-based decision support interventions identified in ONC’s regulations), and health AI tools that support decision-making based on algorithms or models that derive relationships from training data and produce output that results in prediction, classification, recommendation, evaluation, or analysis (predictive decision support interventions in ONC’s regulations). While ONC’s regulations apply to and include requirements for health information technology developers, OCR’s final rule applies to and includes requirements for covered entity users of patient care decision support tools.
The final rule does not apply to tools used to support decision-making unrelated to clinical decision-making affecting patient care or that are outside of a covered entity’s health programs or activities. OCR provides some examples of tools that are likely out of scope, including automated or nonautomated tools that covered entities use for
- administrative and billing-related activities;
- automated medical coding;
- fraud, waste, and abuse detection;
- patient scheduling;
- facilities management;
- inventory and materials management;
- supply chain management;
- financial market investment management; or
- employment and staffing-related activities.
Responsible AI Governance
The responsible use of AI in health care requires the development of effective oversight programs. Whether your organization is developing, procuring, or deploying an AI-enabled technology, a use case–based approach to AI governance can help identify risks and inform how to manage them. The real risks of AI are not the apocalyptic visions of machines taking over our careers and lives; the real risks will come from machines that are not yet smart enough to handle the responsibilities humans give them. Engaging in oversight throughout the AI life cycle, and subjecting AI to the same scrutiny as other new technologies using existing legal frameworks, can help manage these risks.