September 28, 2019

AI Use in Healthcare: Overview of Initial Steps to Develop AI Regulations/Guidances: Security and Safety Issues to Consider

By Patrick Ross, MPH, The Joint Commission, Washington, DC; Kathryn Spates, JD, ACNP-BC, The Joint Commission, Washington, DC; Rajadhar Reddy, MD Candidate, Baylor College of Medicine (The Joint Commission Health Policy Fellow), Washington, DC

Artificial intelligence (AI) is quickly becoming commonplace. Recent developments in computer processing capabilities and the ubiquity of data-collecting smart devices have unlocked the promise of big data, enabling computer scientists to create algorithms and software that mimic cognitive functions and execute problem-solving tasks that previously could be accomplished only by humans.

Healthcare organizations have been quick to realize the potential of AI to vastly improve healthcare delivery. Many organizations have started AI centers or appointed AI directors to help facilitate the use of AI in the healthcare system.  

Researchers are diligently working to expand the ways that AI can be used in clinical settings. Early applications of AI in healthcare include aiding detection and diagnosis in medical imaging, such as the detection of diabetic retinopathy in screening images or computer-assisted classification of skin cancers.1 AI programs using natural language processing can turn unstructured text like clinical notes into machine-readable information. For example, the Department of Veterans Affairs utilizes natural language processing to review medical records and identify Veterans at high risk for suicide.2 Although the uptake of AI continues to grow, there are minimal regulations or guidance documents pertaining directly to AI in healthcare. While the healthcare industry waits for the regulatory sector to catch up, healthcare organizations should be aware of some of the security and safety issues associated with AI use.
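
To make the natural language processing concept concrete, the following minimal sketch (Python, using the open-source scikit-learn library) shows how free-text notes might be converted into numeric features and used to train a simple risk classifier. The notes, labels, and pipeline are hypothetical illustrations and are not drawn from the VA's system or any other deployed tool.

```python
# A minimal, illustrative sketch of how natural language processing can turn
# unstructured clinical notes into machine-readable features for a risk model.
# The notes, labels, and pipeline are hypothetical, not any agency's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "Patient reports hopelessness and recent loss of interest in activities.",
    "Routine follow-up visit; patient reports feeling well, no concerns.",
    "Patient describes persistent insomnia and social withdrawal.",
    "Annual physical exam; labs within normal limits.",
]
high_risk = [1, 0, 1, 0]  # hypothetical chart-review labels

# TF-IDF converts free text into numeric vectors a model can learn from.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, high_risk)

new_note = "Patient reports worsening hopelessness and poor sleep."
print(model.predict_proba([new_note])[0][1])  # estimated probability of the high-risk class
```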

Overview of Initial Steps to Provide Guidance: FDA and other Stakeholders

Policymakers and stakeholders are grappling with how best to regulate, or at minimum provide guidance for the use of, software that can continually learn and adapt as it receives new data, including whether it should be allowed to use this learning capability in real-world settings. The Food and Drug Administration (FDA) is the first agency to facilitate a discussion of greater regulation of AI in healthcare, focusing on modifications to AI-based technologies. AI-based technologies fall under the purview of the FDA, and these technologies are typically approved as Software as a Medical Device (SaMD).3 The FDA has only approved AI-based technologies as a SaMD if their algorithms are locked. Locked AI technology does not engage in self-learning as real-world or new data become available; rather, locked technologies rely on manufacturers to periodically and manually modify the algorithms. In contrast, unlocked technologies have defined learning processes that continually adapt to new information and feedback, thereby altering algorithms to provide more accurate outputs (a brief code sketch following the list below illustrates this distinction). While medical device regulations do not currently address adaptive technology, in April 2019 the FDA proposed a regulatory framework to streamline approvals for adaptive medical AI software while ensuring its safety and transparency.4 The framework relies on a new Total Product Life Cycle approach, which evaluates the safety and effectiveness of the product throughout its life cycle. This approach comprises four basic steps:

  1. Good Machine Learning Practices (GMLP) establish the FDA’s expectations for the safe development and modifications of AI devices. Manufacturers may also apply for the Digital Health Software Precertification (pre-cert) program5 to allow for expedited review if they meet organizational quality and safety standards.
  2. New AI software undergoes premarket review to establish a reasonable assurance of safety and effectiveness. Manufacturers would also submit a proposed predetermined change control plan6 outlining methods for potential future software modifications.
  3. Software enters the market after FDA approval. Manufacturers monitor AI performance and use their approved safety and risk management strategies to guide any necessary changes to the underlying algorithm. These changes may require additional review by the FDA depending on the clinical impact or potential risk of the modification. 
  4. Real-world performance data is periodically submitted to the FDA to promote transparency and continued safety of the device.
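
The distinction between locked and continuously learning algorithms referenced above can be illustrated with a short, hypothetical sketch in Python using the open-source scikit-learn library. The data are synthetic and the models are placeholders; the point is only that an adaptive algorithm's behavior drifts away from the version that was originally reviewed, which is what the proposed framework attempts to manage.

```python
# Hypothetical sketch (not FDA guidance): a "locked" algorithm changes only when
# the manufacturer manually retrains and redeploys it, while a continuously
# learning algorithm updates itself as new real-world data arrive.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_premarket = rng.normal(size=(200, 5))      # synthetic premarket training data
y_premarket = rng.integers(0, 2, size=200)
X_postmarket = rng.normal(size=(50, 5))      # synthetic data seen after deployment
y_postmarket = rng.integers(0, 2, size=50)

# Locked: trained once before marketing; its behavior stays fixed until the
# manufacturer pushes a manual update.
locked = SGDClassifier(random_state=0).fit(X_premarket, y_premarket)

# Continuously learning: starts from the same reviewed model, but incrementally
# updates its weights with each new batch of real-world data.
adaptive = SGDClassifier(random_state=0).fit(X_premarket, y_premarket)
adaptive.partial_fit(X_postmarket, y_postmarket)

# After the update, the two models no longer share the same parameters, so the
# adaptive model's outputs can differ from what was originally reviewed.
print(np.allclose(locked.coef_, adaptive.coef_))  # expected: False
```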

Much of the AI regulatory framework remains undefined, such as what expectations will be included as part of the GMLP and what the reporting parameters will be for pre-certification and real-world data monitoring. Additional steps are needed to establish an oversight process by the FDA. Time will tell if other agencies step forward with guidance for the use of AI in the healthcare system.7

Acknowledging the gap in guidance and regulations, stakeholders are putting forth proposals to ensure the safe and effective use of AI. In 2019, the Consumer Technology Association formed an Artificial Intelligence in Healthcare Working Group,8 which reflects the input of technology companies and healthcare organizations, with the goal of producing voluntary industry standards on AI usage.9

Several other associations have also weighed in. In June 2018 the American Medical Association passed a policy on the use of AI.10 The policy includes promoting the development of thoughtfully designed, high-quality, clinically validated healthcare augmented intelligence (AI); encouraging education for patients and healthcare providers on the benefits and limitations of AI; and exploring the potential legal implications of AI in healthcare. The American Medical Informatics Association (AMIA) has also issued recommendations related to AI. AMIA is supportive of the expansion of AI in healthcare but warns that additional steps must be taken to enhance cybersecurity, as AI may serve as a source of private health data and is vulnerable to having its data intentionally corrupted.11 AMIA has also stressed the need for different regulatory approaches to locked and continuously learning AI algorithms.12

Security and Safety Considerations with AI Use

AI holds the potential to help clinicians perform at higher degrees of accuracy and effectiveness than previously possible. However, this future depends on access to high quality data and the careful consideration of safety risks that stem from adding AI to human workflows.

Lack of Quality Data Input Impacts Reliability of AI Output

Access to large data sets with credible information is the linchpin of AI development. This “training data” is used to test algorithms and guide machine learning, meaning that the accuracy (and thus quality) of this information is paramount to the safety of AI devices. AI is only as good as its training data, hence the common aphorism “garbage in, garbage out.” AI cannot exceed the performance level found in the training data, which represents a best-case scenario for completeness and validity in which a “ground truth” can be established for each data point. Compiling a sufficient number of accurate patient records to train and validate software is often the greatest logistical limitation in software development.13 Even though vast amounts of information are stored in electronic health records (EHRs), accessing this information may be difficult due to interoperability or privacy challenges. In addition, algorithms can reproduce unintended human biases present in the underlying data; for example, biases embedded in the source records can inadvertently become part of the learning process as the algorithm is constructed. Data entry errors also threaten to lower the utility of AI predictions.14
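
As a purely illustrative example of “garbage in, garbage out,” the sketch below (Python with scikit-learn, using synthetic, non-clinical data) trains the same model twice, once on clean labels and once on labels degraded by simulated data entry errors that disproportionately miss one outcome class, then compares performance on held-out data. The dataset, error rate, and model are hypothetical choices.

```python
# Illustrative only: how errors and biases in training labels degrade what a
# model learns. Data are synthetic, not clinical records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate systematic data entry errors: 40% of positive cases in the training
# set are mislabeled as negative (e.g., an under-documented outcome).
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = (noisy == 1) & (rng.random(len(noisy)) < 0.40)
noisy[flip] = 0

clean_model = LogisticRegression().fit(X_train, y_train)
noisy_model = LogisticRegression().fit(X_train, noisy)

# The model trained on corrupted labels tends to be less accurate and biased
# toward missing the under-recorded outcome on the same held-out test set.
print("accuracy, clean labels:", round(clean_model.score(X_test, y_test), 3))
print("accuracy, noisy labels:", round(noisy_model.score(X_test, y_test), 3))
```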

Potential Risk to Patient Privacy

Security protections are a serious concern for AI developers and users because healthcare data consist largely of patient protected health information (PHI) safeguarded under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). To ensure the security and privacy of patient information, PHI must be de-identified before it can be shared for uses such as training data for machine learning. Concerns have been raised that AI may make re-identification of patients easier, and additional steps may be required in the future to refine de-identification protocols so that patients cannot be linked back to their data. Calls for increased patient control over information collection and use recently prompted the European Union to create the General Data Protection Regulation,15 which may foreshadow U.S. proposals to strengthen data ownership rights, including more granular permissions for the sharing and use of personal data and clearer rules on when patient consent must be obtained.
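
A toy sketch of what a rule-based de-identification step might look like is shown below (Python). It redacts only a few identifier formats with regular expressions and falls far short of what HIPAA de-identification actually requires (the Safe Harbor method's 18 identifier categories or a formal expert determination); it is meant only to illustrate the kind of preprocessing applied before notes are shared, for example as machine learning training data. The patterns and the sample note are hypothetical.

```python
# Toy redaction sketch; NOT a HIPAA-compliant de-identification method.
import re

# A few illustrative identifier patterns mapped to placeholder tags.
PATTERNS = {
    "[DATE]": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "[SSN]": r"\b\d{3}-\d{2}-\d{4}\b",
    "[PHONE]": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",
    "[MRN]": r"\bMRN[:# ]*\d+\b",
}

def redact(note: str) -> str:
    """Replace a handful of common identifier formats with placeholder tags."""
    for tag, pattern in PATTERNS.items():
        note = re.sub(pattern, tag, note, flags=re.IGNORECASE)
    return note

sample = "Seen on 03/14/2019, MRN: 4821973; call 202-555-0147 to follow up."
print(redact(sample))
# -> Seen on [DATE], [MRN]; call [PHONE] to follow up.
```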

Potential for Medical Errors

Achieving high-quality healthcare using AI tools will require the careful integration of software and human workflows. While AI programs such as clinical decision support systems can provide reliable information, machine learning places a greater emphasis on digital output in the decision-making process. This growing reliance on digital output raises two interrelated quality concerns: “de-skilling” and “automation complacency.” De-skilling is the loss of skills after a task is automated.16 Automation can lead to a loss of clinician autonomy and/or result in adverse clinical events if a clinician is faced with an unexpected or abnormal algorithm output but has lost the clinical knowledge necessary to appropriately address the information. Automation complacency occurs when human users become overly reliant on AI support and stop looking for confirmatory evidence once given machine-generated output. Automation bias combined with complacency can lead to errors that clinicians are unlikely to catch in a usually reliable system.17 Overreliance on machine input can be mitigated, and medical errors prevented, by empowering clinicians to flag unusual predictions for review and to participate in ongoing skills-building activities.
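
One simple way to operationalize flagging unusual predictions for review is to route any model output the system is not confident about to a clinician rather than presenting it as a definitive answer. The Python sketch below illustrates this idea; the threshold, labels, and data structure are hypothetical choices, not a validated clinical policy.

```python
# A minimal sketch of one safeguard against automation complacency: route any
# prediction the model is not confident about to a clinician for manual review
# instead of presenting it as a definitive answer. The threshold is a
# hypothetical choice, not a validated clinical policy.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str
    probability: float
    needs_clinician_review: bool

REVIEW_THRESHOLD = 0.80  # below this confidence, a human must review

def triage(probability_high_risk: float) -> Recommendation:
    """Convert a model probability into a recommendation with a review flag."""
    label = "high risk" if probability_high_risk >= 0.5 else "low risk"
    confident = max(probability_high_risk, 1 - probability_high_risk) >= REVIEW_THRESHOLD
    return Recommendation(label, probability_high_risk, needs_clinician_review=not confident)

for p in (0.95, 0.62, 0.10):
    print(triage(p))  # the 0.62 case is flagged for clinician review
```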

Conclusion

The role of AI in healthcare is expanding rapidly and has the potential to radically change the day-to-day delivery of medicine. Early demonstrations have already shown that AI can help clinicians perform with greater accuracy and efficiency. However, the regulatory oversight of AI products is still in nascent stages. While the FDA works to develop a regulatory framework for AI software and other stakeholders provide important guidance, healthcare organizations should be aware of the potential safety and legal issues.

The opinions expressed in this article are the authors’ own and do not reflect the views of The Joint Commission.

Endnotes

  1.  MITRE Corp., Artificial Intelligence for Health and Health Care, Dec. 2017, https://www.healthit.gov/sites/default/files/jsr-17-task-002_aiforhealthandhealthcare12122017.pdf.
  2.  M. Ravindranath, How the VA uses algorithms to predict suicide, Politico (Jun. 25, 2019, 4:48PM), https://www.politico.com/story/2019/06/25/va-veterans-suicide-1382379.
  3.  AI-based technologies fall under the federal Food, Drug, and Cosmetic Act when the AI technology is used to treat, diagnose, cure, mitigate, or prevent disease or other conditions. Software as a Medical Device is software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device. FDA, Software as a Medical Device (SaMD) (last visited Aug. 15, 2019), available at https://www.fda.gov/medical-devices/digital-health/software-medical-device-samd.
  4.  Food and Drug Administration, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), April 2, 2019.
  5.  This is a pilot program announced in 2017 that allows the FDA’s Center for Devices and Radiological Health to pre-certify eligible digital health developers who demonstrate a culture of quality and organizational excellence based on objective criteria. The first part of the pilot program is limited to manufacturers of SaMDs. The program entered its test phase in 2019. https://www.fda.gov/medical-devices/digital-health/digital-health-software-precertification-pre-cert-program.
  6.  The predetermined change control plan would include the type of modifications anticipated by developers as well as the protocol the developer would follow to implement such changes. The algorithm change protocol would include algorithm re-training protocols, data management plans for new data collected and used for training, and performance evaluation benchmarks.
  7.  Although the FDA has jurisdiction over AI technologies classified as SaMDs, other agencies such as the Centers for Medicare & Medicaid Services (CMS), the Office of the National Coordinator for Health Information Technology (ONC), and the Agency for Healthcare Research and Quality (AHRQ) may provide guidances at some point. ONC and AHRQ have primarily relied on an independent group of scientists and academics to advise the agencies on how AI may impact healthcare delivery. In May 2019 CMS launched an AI contest to develop an AI tool that can predict patients’ healthcare outcomes and adverse events. However, the agency has yet to put out any guidance documents on AI use in the healthcare setting.
  8.  The Joint Commission is part of this working group.
  9.  Consumer Technology Association, CTA Brings Together Tech Giants, Trade Associations to Improve Efficiencies in AI and Health Care, April 4, 2019,  https://www.cta.tech/News/Press-Releases/2019/April/CTA-Brings-Together-Tech-Giants,-Trade-Association.aspx. The FDA does note in the proposed regulatory framework that statutory changes may be needed to implement some of the proposals.
  10.  American Medical Association, AMA Passes first policy recommendation on augmented intelligence, June 14, 2018, https://www.ama-assn.org/press-center/press-releases/ama-passes-first-policy-recommendations-augmented-intelligence. The AMA chose to use augmented intelligence versus the more commonly used term of artificial intelligence to highlight the assistive role of technology for clinician use.
  11.  American Medical Informatics Association, AMIA Response to FDA AI/ML SaMD Modifications Framework (2019), https://www.amia.org/sites/default/files/AMIA-Response-to-FDA-AIML-SaMD-Modifications-Draft-Framework_0.pd.
  12.  Id.
  13.  Michael van Hartskamp et al., Artificial Intelligence in Clinical Health Care Applications: Viewpoint, Interact J. Med. Res. 8, Apr. 5, 2019.
  14.  Robert Challen et al., Artificial Intelligence, Bias, and Clinical Safety, BMJ Qual. Saf. 28, 231-237 (2019).
  15.  European Union, General Data Protection Regulation, https://eugdpr.org/.
  16.  Trevor Jamieson & Avi Goldfarb, Clinical Considerations When Applying Machine Learning to Decision-Support Tasks Versus Automation, BMJ Qual. Saf. 0, 1-4 (2019).
  17.  Id. 

About the Authors

Kathryn E. Spates is the Executive Director, Federal Relations at The Joint Commission’s Washington, D.C. office, where she analyzes the effects of legislation and regulations on The Joint Commission and healthcare entities across the continuum of care. She is responsible for building relationships with government agencies, Congress, healthcare organizations, and other stakeholders to further The Joint Commission’s strategic opportunities. Ms. Spates responds to congressional inquiries and oversight actions, prepares comment letters, and navigates the legislative and regulatory processes. She previously worked at the Food and Drug Administration and as an attorney at a law firm in Washington, DC. Ms. Spates is also a nurse practitioner and has worked at academic medical centers, the National Institutes of Health, and the U.S. Peace Corps. She may be reached at [email protected].

Patrick Ross is a Senior Federal Relations Specialist at The Joint Commission’s Washington, D.C. office, working on healthcare quality and safety issues, including maternal health safety. Before joining The Joint Commission, he worked at the National Academies of Sciences, Engineering, and Medicine on the Board of Healthcare Services. Mr. Ross holds a Master of Public Health from the Harvard T.H. Chan School of Public Health. He may be reached at [email protected].

Rajadhar (Raj) Reddy served as a Health Policy Fellow at The Joint Commission’s Washington D.C. office, as part of the Health Policy Fellowship Initiative Program. He is currently a medical student at Baylor College of Medicine, Houston, TX. He may be reached at [email protected].