Perhaps the hottest topic in new technology in the healthcare industry is the development and use of artificial intelligence (AI). As AI continues to become an increasingly integral part of the healthcare industry (for example, through assisting in the interpretation of radiology studies, assessing population health issues, or refining billing and collection practices), developers of AI and healthcare providers alike should continuously assess the evolving regulatory and legal framework surrounding AI.
The purpose of this article is to provide an overview of the top 10 legal issues that healthcare providers, healthcare-related companies, and their counsel should consider when developing and/or using AI. This article summarizes, in no particular order of importance:
1. Statutory, regulatory, and common law requirements
2. Ethical considerations
3. Reimbursement issues
4. Contractual exposure
5. Torts and private causes of action
6. Antitrust issues
7. Employment and labor considerations
8. Privacy and security risks
9. Intellectual property considerations
10. Compliance program implications
1. Statutory, Regulatory, and Common Law Requirements
As the direct regulation of AI technologies continues to evolve, healthcare providers and AI developers should (1) expect more legal and regulatory developments to occur as the use of AI technologies proliferates throughout the healthcare industry; and (2) be aware of the current statutory, regulatory, and common law requirements that may be implicated by the development and use of AI technologies. Depending on the function the AI performs, state and federal laws may require a healthcare provider or an AI developer to obtain licensure, permits, and/or other registrations. For example, in many circumstances, the Food and Drug Administration (FDA) could consider AI and other machine learning (ML) based software to be medical devices requiring registration.1 Additionally, as AI functionality expands and potentially (in the distant future) replaces physicians in the provision of services, questions may arise regarding how those services are regulated, and whether their provision would constitute the unlicensed practice of medicine or violate corporate practice of medicine prohibitions.2
2. Ethical Considerations
Where healthcare decisions have historically been almost exclusively human, the use of AI in the provision of healthcare raises ethical questions relating to accountability, transparency, and consent. When a complex, deep-learning algorithm is used to diagnose patients, a physician may not be able to fully explain to the patient the basis of the diagnosis. As a result, a patient may be left without a clear understanding of his or her diagnosis because the diagnosis did not originate with the physician. Further, it may be difficult to allocate accountability between developers and providers when diagnostic errors occur as a result of the use of AI. AI is also not immune to algorithmic bias, which could lead to diagnoses influenced by gender, race, or other factors that have no causal link to the condition being diagnosed.3
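Bias of the kind described above can sometimes be surfaced with a simple disparity audit that compares a model's error rates across demographic groups. The sketch below is purely illustrative — the group labels, outcomes, and predictions are hypothetical — and is not a substitute for a validated fairness review:

```python
# Illustrative disparity audit: compare false negative rates across groups.
# All records below are hypothetical; a real audit would use validated
# clinical outcomes and a far larger sample.

def false_negative_rate(records):
    """Fraction of truly positive cases the model missed."""
    positives = [r for r in records if r["actual"]]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if not r["predicted"])
    return missed / len(positives)

def audit_by_group(records, group_key="group"):
    """Compute the false negative rate separately for each group."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: false_negative_rate(rs) for g, rs in groups.items()}

# Hypothetical model outputs for two patient groups
records = [
    {"group": "A", "actual": True, "predicted": True},
    {"group": "A", "actual": True, "predicted": True},
    {"group": "B", "actual": True, "predicted": False},
    {"group": "B", "actual": True, "predicted": True},
]
rates = audit_by_group(records)
# A large gap between groups (here 0.0 vs. 0.5) warrants investigation.
```

An audit like this does not prove or disprove unlawful discrimination; it simply flags disparities that developers and providers should be prepared to explain.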
3. Reimbursement Issues
The use of AI in both patient care and administrative functions raises questions relating to reimbursement by payors for healthcare services. How will payors reimburse for healthcare services provided by AI, or will they even pay for such services? Will federal and state healthcare programs (e.g., Medicare and Medicaid) recognize services provided by advancing AI technologies?4 While the authors are currently unaware of reimbursement being paid specifically for the use of AI technologies in the treatment of patients, payors have historically provided certain levels of reimbursement for technology-assisted services. This precedent could indicate that, as AI continues to develop, payors may eventually adjust their reimbursement models to account for or specifically reimburse for the use of such technologies.
Additionally, AI has the potential to affect revenue cycle management. In particular, there are concerns that errors could occur when requesting reimbursement through AI. For example, if AI is assisting providers with billing and coding, the provider could be exposed to False Claims Act (FCA)5 liability as a result of an AI error. If a provider uses AI technologies to mine data for billing and coding purposes, and such use results in consistent upcoding, the provider could potentially incur FCA liability. If such an error occurs, it may be unclear whether the developer of the AI or the provider who used it is ultimately responsible, unless responsibility is clearly allocated by contract.
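One way a compliance team might watch for the consistent upcoding described above is to compare the coding-level distribution of AI-assisted claims against a historical baseline. The sketch below is a hypothetical illustration — the CPT codes, samples, and tolerance are invented for this example and would need to be designed with compliance counsel:

```python
# Illustrative compliance check: flag when AI-assisted E/M coding levels
# skew high relative to a historical baseline. Codes, claim samples, and
# the drift tolerance are hypothetical.

def high_level_share(codes, high_levels=frozenset({"99214", "99215"})):
    """Fraction of claims billed at the highest E/M levels."""
    return sum(1 for c in codes if c in high_levels) / len(codes)

baseline = ["99213", "99213", "99214", "99212"]  # pre-AI claims
ai_coded = ["99214", "99215", "99214", "99213"]  # AI-assisted claims

drift = high_level_share(ai_coded) - high_level_share(baseline)
if drift > 0.2:  # hypothetical tolerance
    print("Review AI-assisted coding for potential upcoding")
```

A flag from a check like this is a trigger for human review of the underlying documentation, not a conclusion that a false claim was submitted.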
4. Contractual Exposure
As either a developer of AI or a healthcare provider utilizing AI, it is important to have clearly articulated contracts governing the sale and use of AI technology. Important contractual terms include:
a. Expectations regarding services — what are the specific performance metrics that are expected to be satisfied?
b. Representations and warranties — depending on the contracted relationships, the parties to a contract will expect to have representations and warranties appropriate to such context. For example, end-users of AI may require stronger representations and warranties from AI developers (from whom they may purchase or license AI technology) to allocate risk to the AI developer.
c. Indemnification — both a buyer and developer will need to negotiate how risk is allocated.
d. Insurance — because services performed by AI will have the same or similar risks as if a human counterpart were performing the services, a buyer/licensee will want to insure its business to cover those same risks. Similarly, AI developers will want to maintain appropriate insurance coverage that covers liabilities associated with the use of their AI technologies.
e. Changes in law — AI is rapidly developing, so parties should be prepared for changes in law affecting their contractual arrangements and should provide for flexibility or contingencies.
5. Torts and Private Causes of Action
If AI is involved in the provision of healthcare (or other) services, both the developer and the provider of the services may face liability under a variety of tort law principles. Under theories of product liability (usually involving strict liability), a developer could potentially be held liable for AI that is defectively designed or unreasonably dangerous to consumers. By comparison, it remains an open question whether, at least in the near term, AI developers will be liable for the “acts or omissions” of the AI itself. Certainly, as AI evolves, tort theories could likewise evolve to hold developers liable for what the AI actually does. As a result, those involved in the process (the developer and the provider) will likely have exposure to liability associated with the AI. Whether that exposure sounds in product liability or professional liability will likely depend on the functions the AI performs. Further, depending on how the AI is used, providers may be required to disclose the use of AI to their patients as part of the informed consent process.
As AI becomes more commonly used in the treatment of patients, it has the potential to affect what courts and the medical community view as the medical “standard of care” when determining whether a provider has committed malpractice. On the one hand, the increased use of AI may impose additional obligations on providers to use such technologies as they become part of the standard of care. On the other hand, the integration of emerging AI technologies into a provider’s practice may also carry the risk of violating the standard of care, if such AI is not yet considered to be within that standard. Just as today’s physicians must stay current with the state of practice, users of AI will need to ensure the AI stays current, i.e., that the AI independently “learns” and/or that the users proactively teach it.
6. Antitrust Issues
The Antitrust Division of the Department of Justice (DOJ) has made remarks regarding algorithmic collusion that may impact the use of AI in the healthcare space.6 While recognizing that algorithmic pricing can be highly competitive, the DOJ has acknowledged that concerted action to fix prices may occur when competitors have a common understanding to use the same software to achieve the same results.7 As a result, the efficiencies gained by using AI with pricing information and other competitive data may be offset by the antitrust risks.
7. Employment and Labor Considerations
The use of AI in the workforce will likely affect the structure of employment arrangements as well as employment policies, training, and liability. AI may change the structure of the workforce by increasing efficiencies in job performance and competition for those jobs (i.e., fewer workforce members are necessary when tasks are performed more quickly and efficiently by AI). However, the integration of AI into the workforce also may create new bases for litigation and causes of action based on discrimination in hiring practices. For example, if AI is used in making hiring decisions (or contributes to such decisions), how can the employer ensure that discriminatory characteristics are removed from the analysis? AI also may affect the terms of employment and independent contractor agreements with workforce members, particularly with respect to ownership of intellectual property, restrictive covenants, and confidentiality.
8. Privacy and Security Risks
The use and development of AI in healthcare pose unique challenges to companies that have ongoing obligations to safeguard protected health information, personally identifiable information, and other sensitive information. AI’s processes often require enormous amounts of data. As a result, the use of AI is likely to implicate the Health Insurance Portability and Accountability Act (HIPAA) and state-level privacy and security laws and regulations with respect to such data. Such information may need to be de-identified, or alternatively, an authorization from the patient may be required prior to disclosure of the data to or through the AI.
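As a concrete illustration of the de-identification step mentioned above, a provider might strip direct identifiers from patient records before the data reaches an AI vendor. HIPAA’s Safe Harbor method enumerates 18 categories of identifiers; the sketch below removes only a few illustrative fields and is not a complete Safe Harbor implementation:

```python
# Minimal sketch: remove a few direct identifiers from a record before
# sharing it with an AI system. The field names are hypothetical, and this
# list is illustrative only -- HIPAA's Safe Harbor method requires removal
# of 18 identifier categories, not just these.

DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "email", "phone", "address"}

def strip_identifiers(record):
    """Return a copy of the record without known direct identifiers."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "diagnosis_code": "E11.9",
    "age": 57,
}
shared = strip_identifiers(patient)
# shared retains only the clinical fields: diagnosis_code and age
```

Even with a filter like this, counsel should confirm that the remaining data does not permit re-identification, which is the substantive test under HIPAA.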
Further, AI poses unique challenges and risks with respect to privacy breaches and cybersecurity threats. Because AI technology is computer-based, and in many instances network-based, providers should be cognizant of the inherent risks of increased demands on data storage and of network connectivity security issues as AI is increasingly used to enhance the services provided to patients. The more AI technology is used, the more opportunities bad actors will have to exploit vulnerabilities in providers’ security infrastructure, which could ultimately jeopardize patient information and further expose providers to data privacy and security liability.
9. Intellectual Property Considerations
It is of particular importance for AI developers to preserve and protect the intellectual property rights that they may be able to assert over their developments (e.g., patent rights, trademark rights) and for users of AI to understand the rights they have to use the AI they have licensed. It also is important to consider carefully who owns the data that the AI uses to “learn” and the liability associated with such ownership.
10. Compliance Program Implications
As technology evolves, so should a provider’s compliance program. When new technology such as AI is introduced, compliance program policies and procedures should be updated based on the new technology. In addition, it is important that the workforce implementing and using the AI technology is trained appropriately. As in a traditional compliance plan, continual monitoring and evaluation should take place, and programs and policies should be updated pursuant to such monitoring and changes in AI.
As the use and development of AI grow in healthcare, so will this list of legal considerations. Consequently, healthcare providers, healthcare-related companies, and their counsel should closely monitor emerging laws and regulations and their implications for AI.
1. In its most recent publication, the FDA expressed its commitment to, among other things, (1) developing and updating the regulatory framework surrounding AI/ML-based software as a medical device; (2) developing “good machine learning practices;” and (3) supporting the development of methodologies for evaluating and improving ML algorithms. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (January 2021), U.S. Food & Drug Administration, available at https://www.fda.gov/media/145022/download.
2. At least to some extent, state law often prohibits non-licensed entities and persons from providing professional medical services to patients, which is referred to as a prohibition on the “corporate practice of medicine” or “CPOM.” In the context of AI, unlicensed AI technologies providing medical services for patients may implicate CPOM prohibitions if performed, or deployed, by lay entities/persons.
3. A recent study discovered significant racial bias in certain AI technologies involved in allocating risk among Black and White patients. See generally Obermeyer, Z., et al., Dissecting racial bias in an algorithm used to manage the health of populations, Science (Oct. 25, 2019), available at https://science.sciencemag.org/content/366/6464/447.
4. For example, there is a history of reimbursement being made for mammography screenings that utilized computer-aided detection technologies.
5. See generally 31 U.S.C. § 3729.
6. See generally Algorithms and Collusion – Note by the United States, Organisation for Economic Co-operation and Development (May 26, 2017), available at https://one.oecd.org/document/DAF/COMP/WD(2017)41/en/pdf.
About the Authors
Ken Davis is a partner at Katten Muchin Rosenman LLP. He provides counsel on the formation of new businesses, joint ventures, networks, and management and other service relationships to integrate and improve the efficacy of healthcare. Advising on initial structuring and business model development, analysis of regulatory and reimbursement issues, private equity, debt-based and leasehold financing, and mergers and acquisitions, he represents physicians, hospitals, ancillary service companies, and other healthcare and e-health providers in transactions and regulatory matters. He also keeps his clients up to date on changing healthcare regulations, including the Stark Law, the federal Anti-Kickback Statute, HIPAA, the Affordable Care Act, and a range of state laws. He may be reached at [email protected].
Ashley Francois is a Katten associate who focuses on healthcare transactional and regulatory matters including mergers, acquisitions, corporate reorganizations, and joint ventures. She conducts and coordinates due diligence, prepares disclosure documents, and drafts a variety of healthcare-related agreements. She also advises clients on compliance with the Stark Law, the Anti-Kickback Statute, and HIPAA, as amended. She may be reached at [email protected].
Cheryl Camin Murray is General Counsel of The GI Alliance, the largest, physician-led gastrointestinal network in the United States. She is a former partner at Katten Muchin Rosenman LLP, where she counseled healthcare providers, financial institutions, and other businesses on entity formation, structural, contractual, licensing, and regulatory issues. She conducted client training on healthcare fraud and abuse, corporate governance, HIPAA compliance and state privacy and security matters. She also advised clients in regard to their clinically integrated networks and other arrangements and transactions in compliance with the Anti-Kickback Statute, the Stark Law, HIPAA and the HITECH Act, as well as other federal and state laws and regulations. She may be reached at [email protected].