
ABA Health eSource | August 2024

Artificial Intelligence in Healthcare: Ramifications for Governance, Quality of Care and Patient Safety, and Compliance and Ethics

Corey Perman, Carrie Uebel, Thomas F. O'Neil III, Robert Yates, and Yanqing Lei

Summary

  • Providers are already using AI to assist in patient care, including reading and interpreting diagnostics, developing personalized medicine and treatment plans, and robot-assisted surgery.
  • AI has proven to be useful for healthcare administration in two main areas: virtual assistants and workflow automation.
  • Ethical challenges exist when AI is used in healthcare, including patient privacy and security, bias, explainability, and accountability.
  • A healthcare organization's AI governance framework should include safeguards for data protection; privacy and security risk reviews; guidelines for internal use of AI; employee training and education; and detection and mitigation of biases.

In recent years, artificial intelligence (AI) has exploded into the zeitgeist, with applications being developed for nearly every facet of human life. The healthcare sector has been at the epicenter of this shift, and the potential consequences are profound. With its wide range of applications and use cases, AI presents many opportunities for improved efficiency and quality of care for life sciences firms, healthcare providers, and payers of healthcare services; it also presents a range of risks and ethical concerns that must be addressed. When technological advancements of this magnitude occur, public policy understandably needs time to develop. During this period, industry participants must identify and assess key strategic opportunities, monitor the competitive landscape, discern stakeholder expectations, and address current operational and future regulatory risks.

The AI applications for quality of care and compliance and ethics programs are particularly interesting given the administrative burden required to design, implement, and maintain programs that are effective, scalable, and sustainable—and sufficiently flexible to address critical legal, regulatory, and public policy developments. AI can considerably aid quality of care by enabling more incisive and timely monitoring of patient data to detect anomalies. AI can also enhance compliance and ethics program effectiveness by enabling new approaches in data analysis to promptly identify patterns and trends suggesting potential fraud, waste, and abuse so that they can be assessed and, as warranted, addressed before they become systemic.

AI presents an unprecedented opportunity to alleviate existing administrative burdens that are often cited as leading causes of turnover among healthcare workers and to improve the quality of patient care. While these are laudable goals, the large initial capital expenditures, inconsistent current federal and state policies on AI, incomplete or biased data sets, and patient privacy concerns pose significant challenges at this juncture.

This article provides a foundational background on AI and its applications in healthcare before discussing AI's ramifications for healthcare governance, quality of care and patient safety, and compliance and ethics.

Overview

AI is a field of technology that enables machines and computers to simulate human intelligence and problem-solving capabilities. Using algorithms, it makes predictions based on large datasets to achieve a desired outcome, but it is not equivalent to human cognition. An algorithm is a finite, step-by-step procedure for solving a mathematical problem, frequently involving the repetition of an operation.

“AI” is not a monolithic term. Rather, it encompasses a suite of related technologies. These include, but are not limited to, machine learning (a subset of AI that gives computers the ability to perform tasks without explicit instructions), deep learning (a branch of machine learning that uses layered algorithmic structures, modeled loosely on the human brain, called artificial neural networks), and conversational agents or chatbots that simulate human conversation. Each of these technologies has its own set of applications; they build upon one another and carry out different functions.

Today, in the healthcare industry, machine learning is used mainly for predictive analytics: models trained on historical data identify patterns and make predictions that support patient outcomes, such as forecasting disease outbreaks or identifying the most effective treatment plans for patients.
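
To make the idea concrete for non-technical readers, the sketch below shows, in simplified form, how a predictive model might be trained on historical encounter data to estimate a patient's readmission risk. The file name, column names, and outcome label are hypothetical placeholders; a real clinical model would require far more rigorous data preparation and validation.

```python
# Minimal sketch: a predictive model trained on historical patient encounters.
# The CSV file, feature columns, and outcome label are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = pd.read_csv("historical_encounters.csv")  # hypothetical data extract
features = data[["age", "prior_admissions", "hba1c", "length_of_stay"]]
outcome = data["readmitted_within_30_days"]      # 1 = readmitted, 0 = not

X_train, X_test, y_train, y_test = train_test_split(
    features, outcome, test_size=0.2, random_state=0
)

# Fit on past cases, then check how well the model ranks held-out cases.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]
print("Validation AUC:", roc_auc_score(y_test, risk_scores))
```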

Generative AI, by contrast, can produce new content, such as images, text, or even synthetic medical data, based on patterns discerned from existing data. Deep learning models can, among many other uses, analyze and learn from visual data such as medical images, real-time video feeds, and patient photographs to assist physician decision making in complex situations. Natural language processing, which increasingly relies on deep learning, extracts and learns from information such as clinical notes, medical records, and research reports, and is used to inform decisions that affect quality of care. An example is Google DeepMind's medical imaging work, which supports physicians and improves patient outcomes by analyzing patients' medical images and spotting health problems early. Together, this collection of AI-related technologies has great potential to support physicians' work and improve the quality of patient care.

While AI in healthcare has attracted increasing attention in recent years, it is not, chronologically speaking, a new technology. AI, and especially its applications in healthcare and medicine, has evolved dramatically over the past five decades. The term was coined by a group of computer scientists at Dartmouth College in 1956. The subsequent advent of machine learning and deep learning (both branches of AI) laid a strong foundation for the growth and utilization of AI in the healthcare industry. For instance, the causal-associational network (CASNET) model in 1976 was one of the first AI programs to use observations of a patient, pathophysiological states, and disease classification to help physicians interpret complex cases of disease. Shortly thereafter, patients' physical medical records began transitioning to electronic health records (EHRs), providing healthcare professionals with digital access to clinical, epidemiological, and outcomes research for individual patients and various populations. Leveraging this rich new data source, companies have developed clinical decision support software that enables healthcare providers to access and analyze large volumes of clinical data in addition to emerging research.

The exponential development and adoption of AI technology in healthcare and medicine is a profound technological advancement—akin to the advent of the internet—and will prove to be a significant disruptive force over at least the next 10 years.

Industry Perspectives

As the number of AI applications and use cases continues to grow exponentially, the healthcare industry is confronted with an extraordinary opportunity to reimagine and redesign how treatments are developed and how care is delivered. At the same time, the adoption of AI in healthcare poses significant challenges that we are only just beginning to understand.

Opportunities

The global market for AI in healthcare has been projected to grow from a total value of $20.4 billion today to $148 billion by 2029, $287 billion by 2031, and $431 billion by 2032. One of the largest players in healthcare is already planning and testing over 500 AI use cases across its organization. The spectrum of opportunities for AI in healthcare spans life sciences, care delivery, and administration.

Life Sciences

In the life sciences segment of the industry, researchers are beginning to utilize AI in the development of new drug therapies. Traditional drug discovery is information-intensive and time-consuming, which also makes it expensive: recent reports suggest that global pharmaceutical research and development spending totaled roughly $244 billion in 2022. Recognizing the potential for disruption in drug development, the U.S. Food and Drug Administration (FDA) has provided guidance in discussion papers and requested feedback from industry participants to collaborate on further industry guidance. While this guidance is certainly helpful, it is just that: guidance. It lacks the full force and effect of a duly enacted statutory and regulatory framework.

Meanwhile, as the state of regulations and guidance evolve, AI could supercharge drug discovery with faster and more efficient molecular data analysis, therapeutic target discovery, and clinical trial optimization. This is already starting to happen. A recent study found that, in a universe of 102,454 investigational and approved drugs, manufacturers reported that 165 new drugs were developed using AI, with one of those already receiving approval. Most commonly, AI was used for molecular discovery and for drug target discovery, but AI was also reportedly used for molecular design, biomarker identification, patient stratification, and clinical outcome predictions and analysis.

A similar phenomenon is well underway with medical devices. As with drug development, the FDA sought to get ahead of the curve, proposing in April 2019 a regulatory framework for AI- or machine learning-based "software as a medical device" (SaMD) and requesting feedback from industry participants. This framework introduces several important concepts, including "Good Machine Learning Practices" (GMLP) and a predetermined change control plan. The latter includes SaMD pre-specifications (the types of anticipated modifications, based on the retraining and model update strategy and associated methodology) and the algorithm change protocol used to implement those changes in a controlled manner that manages risks to patients. In January 2021, the FDA published an action plan for SaMD that incorporated feedback from industry.

SaMD is already making an impact on the practice of medicine. As of late 2023, the FDA had approved nearly 700 AI/ML-enabled medical devices, with well over 400 of those approved since 2020. Most SaMD (over 75% of approvals) is for use in the radiology subspecialty, with cardiology a distant second. SaMD is most often used for image processing, including ultrasonic and nuclear magnetic resonance imaging, computed tomography, and computer-assisted triage and notification. Specific examples of approved SaMD include imaging tools like ContaCT, which detects strokes from computed tomography (CT) brain images and notifies doctors, and FibriCheck, which analyzes heart rhythms to detect atrial fibrillation; diabetes management tools like IDx-DR, which detects diabetic retinopathy, and the Guardian Connect System, which monitors blood glucose; orthopedic tools like OsteoDetect, which aids in the diagnosis of wrist fractures; and wearable monitors like the Embrace2, which detects signs of seizures.

Interestingly, under the 21st Century Cures Act, administrative support software, electronic health records, and clinical decision support (CDS) are exempt from the FDA's definition of a device and from the regulatory framework proposed for AI/ML-enabled SaMD. Examples of exempt technologies include software for patient scheduling, practice management, or maintaining financial records (administrative support); a mobile application that allows providers to access their patients' health records on a web-based platform (electronic health records); and software that analyzes patient and general medical data to provide or support medical recommendations (CDS). As CDS continues to grow in popularity and influence, it could soon become a target for further FDA regulation or guidance given its growing direct impact on patient care.

Care Delivery

Providers are already using AI to assist in patient care delivery. Current uses include CDS, reading and interpreting diagnostics, developing personalized medicine and treatment plans, and robot-assisted surgery.

AI-powered CDS tools can enhance a provider’s knowledge by synthesizing emerging relevant research, treatment guidelines, risks, past similar cases, and learned health patterns, often much more efficiently than humans. Advanced imaging CDS tools could help radiologists flag acute anomalies faster, leading to better prioritization of cases and faster treatment of more urgent cases.

Along the same lines, AI can be used to analyze images such as X-rays, MRIs, and CT scans along with data from electronic health records to detect patterns, anomalies, risks, and even diseases. For example, AI has been used to diagnose diabetic retinopathy in eye scans and identify skin conditions based on images of skin lesions.

AI is also showing promise in personalized medicine. It can tailor treatment options, care planning, and drug dosage recommendations based on an analysis of a patient’s genetic data, lifestyle, medical history, and other individual factors.

Robot-assisted surgeries are increasingly common, and AI-guided surgical systems are now enabling surgeons to complete more complex procedures, with more precision, and with minimally invasive techniques. AI-powered robots are also being developed to assist patients with physical therapy.

Administration

While we are still in the early phases of the journey, thus far, AI has proven to be useful for healthcare administration in two main areas: virtual assistants and workflow automation. AI-powered virtual assistants can be used to assist patients with a multitude of issues, including answering simple questions, providing basic health advice, scheduling appointments, and generally acting as a receptionist. AI can greatly reduce administrative burden by automating time-consuming tasks like patient flow, bed availability, and insurance authorization. As healthcare organizations increasingly focus on provider well-being to prevent rampant turnover, AI-powered clinical documentation tools are perhaps the most promising in this category by allowing providers to quickly and accurately document patient encounters.

Challenges

For all the opportunities, AI also presents a number of thorny challenges requiring careful consideration by regulatory agencies, healthcare leaders, providers, and patients. These challenges fall generally into two categories: production and ethics.

Production

Developing an AI tool for clinical medicine is a resource-intensive endeavor. Time is one factor. Developing, training, and deploying an effective AI model is a lengthy process that includes steps like cleaning data, engineering features, tuning hyperparameters, and debugging code. Funding is another critical factor, as AI development and deployment demand significant cutting-edge computing resources. By 2028, estimates predict that the AI industry will consume more energy than the entire country of Iceland did in 2021.
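
As a rough illustration of those steps, the sketch below strings together data cleaning (imputation of missing values), basic feature engineering (scaling), and hyperparameter tuning in a single scikit-learn pipeline; the synthetic data stand in for a real, and far messier, clinical dataset.

```python
# Minimal sketch of the steps named above, chained in one scikit-learn pipeline.
# The data here are synthetic; real clinical data would be larger and messier.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))
X[rng.random(X.shape) < 0.05] = np.nan          # simulate gaps that need cleaning
y = (rng.random(1000) < 0.3).astype(int)        # synthetic binary outcome

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # data cleaning
    ("scale", StandardScaler()),                    # simple feature engineering
    ("clf", LogisticRegression(max_iter=1000)),     # model to be tuned
])

# Hyperparameter tuning: cross-validated search over a small grid of settings.
search = GridSearchCV(pipeline, param_grid={"clf__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)
print("Best settings:", search.best_params_)
```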

Data is perhaps the most important resource consideration for AI production. While the provision and administration of healthcare generates a dizzying amount of data, obtaining quality data for training medical AI can be difficult for several reasons. Patient data is often fragmented and siloed across various systems and sites of care, and the data commonly contain incomplete or inaccurate information. Medical experts are needed to annotate images and flag adverse events in time-series data for training AI models, which can cause bottlenecks and added expense. Add in the growing number of privacy regulations around medical data and the small sample sizes for certain conditions, and collecting high-quality medical data to train AI models can become prohibitively expensive.

Ethical Considerations

Society has identified several ethical challenges with AI, especially as it is adopted in healthcare, including patient privacy and security, bias, explainability, and accountability. Although these are the most pressing concerns today, it is important to remember that we are in the early stages of AI adoption, and many unknowns have yet to surface.

The large amount of medical data required to train AI models raises questions regarding patient privacy and security. Not only must healthcare providers remain vigilant in protecting patient information from unauthorized use, access, and disclosure, but they must also ensure that AI systems are protected from cyberattacks. AI developers can use anonymized patient records and federated training methods to mitigate the risk of inappropriately disclosing patient data.
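
The sketch below illustrates the intuition behind federated training: each site fits a model on its own records and shares only the resulting model coefficients, which are then averaged, so raw patient data never leave the site. The site datasets are synthetic placeholders, and the single unweighted averaging round omits the iterative, weighted communication used in real federated learning protocols.

```python
# Minimal sketch of federated averaging: each site trains locally and shares
# only model coefficients, never raw records. Site data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def local_update(n_patients):
    """Train at one site and return only the fitted coefficients."""
    X = rng.normal(size=(n_patients, 4))                 # synthetic local features
    y = (X[:, 0] + rng.normal(size=n_patients) > 0).astype(int)
    model = LogisticRegression().fit(X, y)
    return np.concatenate([model.coef_.ravel(), model.intercept_])

# Each hospital computes an update on-site; only these small vectors are pooled.
site_updates = [local_update(n) for n in (500, 800, 300)]
global_coefficients = np.mean(site_updates, axis=0)      # one averaging round
print("Aggregated model coefficients:", global_coefficients)
```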

Algorithmic biases can lead to inaccurate or unfair decisions, which in turn can produce harmful outcomes for patients. Bias can be introduced at any stage of development (problem formulation, training, model design, optimization, evaluation, and deployment) and must be continuously monitored to avoid patient harm. Examples of potential harm caused by biased AI tools include recommending lower levels of care for marginalized patients and discharge models that underestimate risks for certain populations.
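
One simple form of that monitoring is a routine subgroup audit. The sketch below compares the false-negative rates of a deployed risk model across patient groups; the prediction log, column names, and subgroup field are hypothetical placeholders, and a large gap between groups would be a prompt to investigate the model and its training data.

```python
# Minimal sketch of a subgroup bias audit on a deployed model's prediction log.
# The file, column names, and subgroup field are hypothetical placeholders.
import pandas as pd

scored = pd.read_csv("model_predictions_with_outcomes.csv")  # hypothetical log

def false_negative_rate(group):
    """Share of truly high-need patients the model failed to flag."""
    truly_high_need = group[group["actual_high_need"] == 1]
    return (truly_high_need["predicted_high_need"] == 0).mean()

# A large gap between subgroups is a signal to re-examine the model and data.
fnr_by_group = scored.groupby("patient_group").apply(false_negative_rate)
print(fnr_by_group.sort_values(ascending=False))
```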

Advanced AI systems can lack explainability and transparency. The deep neural networks underlying an AI model can contain millions of parameters and complex, nonlinear interactions between data points. This opacity creates a number of problems, including difficulty in validating results and improving the model, as well as general patient distrust of AI.

These issues culminate in the question of accountability: who is to blame when an AI error causes patient harm? Given the growing intertwined role of AI systems and the decision making of healthcare providers, determining accountability, and, ultimately, legal liability, could be a daunting challenge in any circumstance.

Healthcare leaders must begin to think strategically about enabling their organizations to capitalize on AI’s opportunities while mitigating the significant risks that it poses. Governing bodies, quality and patient safety teams, and compliance and ethics teams are uniquely positioned to perform that task, as we discuss in the following section.

Ramifications for Governance, Quality of Care and Patient Safety, and Compliance and Ethics

As healthcare companies grapple with the opportunities and challenges presented by AI, an organization’s governing body should develop a framework for deciding when and how to adopt AI and overseeing such tools once they are implemented. Along the same lines, management teams should diligently study new use cases for implementation at the operational level as well as dashboard tools to summarize and monitor key performance measurements. Although a majority of companies across industries are considering or currently implementing a governance framework, less than 50% of organizations have a functional AI governance framework in place.

Governance

While the 21st Century Cures Act and the related Health Data, Technology, and Interoperability (HTI-1) rule regulate certain aspects of AI usage, the United States has not yet adopted a comprehensive AI regulatory scheme along the lines of the European Union's AI Act. That will no doubt change, but as the mosaic of federal and state regulations evolves, organizations seeking to implement AI in healthcare delivery and administration should engage with their governing bodies to self-govern and to establish ethical boundaries through a defined set of practices, policies, and operating frameworks.

Governing bodies, often, but not always, referred to as boards of directors, owe their organizations the fiduciary duties of care, loyalty, and obedience. The Caremark doctrine clarifies that the duty of care includes reasonable oversight of the organization, and directors may be held liable for losses incurred that could have been prevented with such oversight. Reasonable oversight requires the analysis of adequate information. Governing bodies may rely on management or outside advisors to supply such information, provided their reliance is not misplaced. If a matter requires further inquiry, directors should continue asking questions until they are satisfied that the issue has been mitigated or resolved.

Given the risks inherent in the adoption of AI technologies, an organization's governing body should be informed and involved in the development of an AI governance framework to ensure the ethical, responsible, and effective deployment of AI. At the top of that framework sits the governing body, its advisors, and its standing committees, including their executive liaisons. To effectively oversee the adoption of AI, healthcare governing bodies may consider recruiting a director with the appropriate technology expertise. Given the profound technological shift that AI brings, all directors should take the initiative to become conversant in the evolving technology, especially as it is applied in healthcare. If recruitment or education is not practical, the governing body should consider retaining an outside advisor to guide the organization as it adopts and deploys AI technologies. Depending on its size, the governing body should also consider forming a committee responsible for overseeing the organization's implementation of AI tools.

After the governing body, the management team is the next layer of the AI governance framework. The executive leadership team and its management committees are responsible for developing policy and executing strategy. Like governing bodies, management teams should consider creating cross-functional teams to effectively manage AI, ensuring representation from functions such as clinical care, information technology, information security, data science, legal, and compliance.

In terms of policy, a healthcare organization's AI governance framework should include safeguards such as policies and processes that establish data quality, collection, and storage reviews; data protection, privacy, and security risk reviews; guidelines for internal use of AI; employee training and education; and detection and mitigation of biases. These policies and processes will require frequent review and updating given the dynamic nature of both the technology and its regulatory landscape.

In addition to safeguards, management teams will need to develop policies on the issue of transparency with patients. As AI is increasingly deployed in healthcare, patients are likely to seek information regarding which parts of their care are administered by humans and which are performed, or enhanced, by AI. Providing this knowledge transparently, perhaps during an informed consent discussion, could prove to be a competitive differentiator in a rapidly changing marketplace for healthcare services.

Healthcare organizations typically form committees dedicated to managing quality of care, patient safety, ethics, and compliance. These management committees are uniquely positioned to be both adopters and overseers of AI technology.

Quality of Care and Patient Safety

Depending on the particular industry segment, healthcare organizations typically have a committee dedicated to managing patient safety, the quality of care provided, and/or affordable and equitable access to healthcare services. As priorities shift from volume-based healthcare toward value-based care, quality of care now underpins not only the conditions of participation in federal healthcare programs but also provider reimbursement for the care delivered.

As discussed above, AI presents an enormous opportunity to improve the quality of care provided and increase access to healthcare services. Quality of care and patient safety teams should therefore be actively involved in the development and deployment of clinical AI tools under the organization's AI governance framework. Like their organizations' governing bodies, quality and patient safety teams may need to recruit new team members with the AI expertise required to successfully manage and deploy new AI-enabled tools. Such expertise could include computer science and neural networks, but it could also include medical ethics, so that the group can effectively consider issues of bias when utilizing tools like AI-powered CDS.

Quality and patient safety teams should consider developing their own AI-powered dashboards to monitor key performance indicators in their organization’s data compared to internal and external benchmarks. Many organizations already create these dashboards manually, if they have the resources. These dashboards could have a significant impact on organizations that have access to vast amounts of data, but currently lack the resources to analyze it in a meaningful way.
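
A minimal version of such a dashboard might look like the sketch below, which compares an organization's latest quality indicators against benchmark values and flags exceedances for review; the data file, indicator names, and benchmark figures are hypothetical placeholders.

```python
# Minimal sketch of a quality dashboard: compare the latest value of each
# indicator to a benchmark and flag exceedances. The file, indicator names,
# and benchmark values are hypothetical placeholders.
import pandas as pd

kpis = pd.read_csv("monthly_quality_indicators.csv")  # columns: month, indicator, value
benchmarks = {"readmission_rate": 0.15, "infection_rate": 0.02, "fall_rate": 0.01}

latest = kpis.sort_values("month").groupby("indicator").last()
latest["benchmark"] = latest.index.map(benchmarks)
latest["exceeds_benchmark"] = latest["value"] > latest["benchmark"]

# Indicators above benchmark rise to the top of the review list.
print(latest.sort_values("exceeds_benchmark", ascending=False)[
    ["value", "benchmark", "exceeds_benchmark"]
])
```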

Compliance and Ethics

Over the last 20-plus years, federal healthcare regulatory and enforcement agencies, including the Department of Justice (DOJ), the Office of Inspector General (OIG) of the Department of Health and Human Services, and the Centers for Medicare & Medicaid Services (CMS), have taken the lead in articulating expectations and developing standards for corporate compliance and ethics programs, with at least one state agency joining the dialogue in recent years. With varying degrees of clarity and certainty, they have created meaningful incentives for organizations, along with potentially impactful penalties.

Beyond merely avoiding potential penalties, emerging research indicates that robust compliance and ethics programs create substantial enterprise value. Consumers will pay a premium for products and services from companies with strong compliance and ethics programs. Applying those findings to healthcare, consumers could pay a premium for services provided by organizations that they know protect their sensitive information, and investors could factor that consumer preference into the valuation of a healthcare company with a robust compliance and ethics program.

A robust compliance and ethics program must include, at minimum, the Seven Elements as elucidated by the OIG. The Seven Elements of an effective compliance program infrastructure are:

  • Written Policies and Procedures;
  • Compliance Leadership and Oversight;
  • Training and Education;
  • Effective Lines of Communication with the Compliance Officer and the Disclosure Program;
  • Enforcing Standards: Consequences and Incentives;
  • Risk Assessments, Auditing, and Monitoring; and
  • Responding to Detected Offenses and Developing Corrective Action Initiatives.

In addition to the Seven Elements, the federal agencies now also emphasize Tone at the Top, commitment to continuous improvement, and effective governance and oversight. Tone at the Top refers to the values and culture established by management, setting a foundation that compliance and ethics are a shared responsibility for the entire organization. A commitment to continuous improvement can be demonstrated with holistic annual risk assessments, root cause analyses when systemic issues are suspected, and periodic testing of the compliance and ethics program to determine whether any risk areas have developed over time.

As discussed above, the organization's governing body is responsible for ensuring effective governance and oversight. The board should already have a robust communication channel established with the organization's chief compliance officer (CCO), and the CCO's reports should routinely describe any risks related to the organization's deployment of AI technology. We expect the DOJ's forthcoming revision of its guidance on corporate compliance programs to include a similar oversight recommendation.

Healthcare compliance and ethics programs should also play a role in the organization’s AI governance framework. In collaboration with the information technology and information security teams, the compliance and ethics team should be charged with monitoring compliance with the organization’s AI-related policies and procedures, as well as applicable laws and regulations.

Compliance and ethics teams should make use of AI-powered dashboards just like their quality and patient safety counterparts. Compliance and ethics dashboards should monitor organization-specific high-risk areas, as informed by its annual and/or ongoing compliance risk assessments. AI can help streamline the analysis and visualization of key performance indicators gleaned from compliance disclosures and investigations, billing and coding data, and conflicts of interest information, among other reports and sources of data.
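
As one illustration, the sketch below flags providers whose use of a single high-level billing code deviates sharply from their peers, the kind of signal a compliance dashboard might surface for follow-up; the claims extract, column names, and three-standard-deviation threshold are illustrative assumptions rather than an endorsed audit methodology.

```python
# Minimal sketch of a billing-pattern check: flag providers whose use of one
# high-level code deviates sharply from peers. The claims extract, columns,
# and threshold are illustrative assumptions only.
import pandas as pd

claims = pd.read_csv("claims_extract.csv")      # columns: provider_id, cpt_code
code_of_interest = "99215"                      # illustrative high-level E/M code

# Share of each provider's claims billed at the code of interest.
rates = (
    claims.assign(flagged=claims["cpt_code"] == code_of_interest)
    .groupby("provider_id")["flagged"]
    .mean()
)

# Flag providers far above the peer average for compliance follow-up.
threshold = rates.mean() + 3 * rates.std()
print(rates[rates > threshold].sort_values(ascending=False))
```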

Beyond dashboards for monitoring data, AI could significantly streamline the day-to-day operations of the compliance and ethics team. For example, AI could be used to summarize process reviews and walkthroughs in compliance audits, analyze data sets for trends and themes, assist in developing investigation summaries, and identify emerging risks in an expedited manner.

Conclusion

Forward-thinking governing bodies and management teams are already approaching AI similarly to how companies approached the internet when it started to become clear that a technological revolution was underway. Irrespective of industry, size, or complexity, corporate governance, compliance and ethics, and risk management teams are considering the opportunities and challenges that AI poses to their businesses.

Once lawmakers, regulators, and enforcement agencies accelerate their engagement with AI, we expect a swift proliferation of regulatory developments and related guidance. Legal and compliance professionals must remain vigilant in this evolving regulatory landscape while acknowledging that our system of federalism and shifting political winds inherently invite some level of inconsistency. In highly regulated industries like healthcare, success will likely be limited to those organizations with the foresight and diligence to prepare for the ethical issues identified here. Foresight requires that organizations assume the societal responsibility of education, drawing on diverse perspectives about the capabilities and possibilities of AI. Diligence requires that organizations invest in key resources and remain ready to adapt to an ever-changing industry whose decisions carry human consequences. Governing bodies, quality and patient safety teams, and compliance and ethics teams will all play crucial roles in developing a robust AI governance framework to ensure patient safety, optimize care delivery, and shepherd their organizations through this foundational shift in technology. Their decisions today will shape how AI changes the delivery of healthcare over the next one to two decades.
