Patient Privacy and Informed Consent
Patient autonomy, privacy, and informed consent must be considered as healthcare providers continue to utilize AI/ML and predictive algorithms to diagnose and treat patients. Healthcare providers must clearly communicate the potential risks, benefits, and outcomes of particular treatments to patients and caregivers to support their ability to make informed decisions about their care. Both the physician and patient (or caregiver) must understand the effectiveness of the AI/ML system and how it will be incorporated into the patient’s care, any limitations of the AI/ML, whether the patient’s information will be secure, and whether there is any potential for bias or harm. The 2023 American Medical Association Principles for Augmented Intelligence Development, Deployment, and Use (AMA Principles) caution that “[w]hen AI is utilized in health care decision-making, that use should be disclosed and documented in order to limit risks to, and mitigate inequities for, both physicians and patients, and to allow each to understand how decisions impacting patient care or access to care are made.”
Privacy risks associated with using AI/ML, including whether algorithms will be trained on identifiable data, should be assessed in accordance with the Health Insurance Portability and Accountability Act (HIPAA) and other applicable state and federal privacy laws. Providers should also weigh the risk of breach and the likelihood that an individual could be reidentified if a breach occurs. As AI/ML’s ability to reidentify datasets improves and the technology advances, serious consideration should be given to an individual patient’s privacy risks where protected health information (PHI) or other sensitive health data was used to train the AI/ML. The ABA has previously reported on AI and data privacy concerns in the November 2023 edition of eSource.
Clinical Decision Making
Generative AI/ML can augment clinical decision making through enhanced diagnostic tools and alternative treatment options. Data or images may be fed into an AI/ML diagnostic model, which can then predict outcomes or assist in diagnosis by identifying patterns or flagging potential disease by tying together seemingly unrelated symptoms. AI/ML can also assist providers with the review of diagnostic tests, such as with AI-enabled digital pathology or mammography. While the provider still makes the ultimate diagnosis, these platforms can identify and magnify cells or anomalies that might be overlooked by the human eye, or apply predictive algorithms to identify early cell changes that, based on prior outcomes, may be more likely to become malignant. Without AI/ML assistance, providers may be limited by their own diagnostic experience and the current standard of care, which can result in diagnostic decisions based on static decision trees. Generative AI/ML often incorporates data from hundreds or thousands of sources and updates in real time or near-real time, continually building the knowledge base and library of information on which diagnostic decisions are based. For example, hospitals across the U.S. perform approximately 3.6 billion imaging procedures annually, and those images can be used to simultaneously train and update the AI/ML, improve detection capabilities, and reduce error. Giving providers access to greater sources of data and emerging standards of care can improve diagnosis and treatment outcomes.
For better or worse, patients have already turned to AI/ML to expand their personal access to medical information and diagnosis. For example, in 2023, in a widely reported case, a mother took her child to seventeen doctors after the child complained of a persistent toothache and experienced stunted growth. None of the doctors was able to diagnose the child’s ailment. The mother turned to ChatGPT, entering her son’s symptoms into the AI/ML chatbot, which in turn suggested her son might be suffering from a rare neurological condition, tethered cord syndrome. Soon after, a neurosurgeon confirmed the diagnosis.
Empowering patients and caregivers to research their own symptoms helps them advocate for themselves, yet providers can feel pressured to order diagnostics and procedures that may be unnecessary because “Dr. ChatGPT” suggested a rare and unlikely condition. ChatGPT does not consider context or nuanced symptoms when making “diagnoses,” and its results depend on the accuracy of the patient’s prompt. Patients might confuse medical terminology or omit certain symptoms and receive wildly varying results that can lead a provider to chase “zebras rather than horses,” as the saying goes.
Providers should be careful not to rely too heavily on generative AI/ML. The AMA Principles clearly state that “[c]linical decisions influenced by AI must be made with specified human intervention points during the decision-making process. As the potential for patient harm increases, the point in time when a physician should utilize their clinical judgment to interpret or act on an AI recommendation should occur earlier in the care plan.” AI/ML should not make clinical decisions; rather, the provider should incorporate the AI/ML recommendation into a treatment plan or diagnosis instead of deferring to the AI/ML outright. Where generative AI/ML is poorly trained, or its intended use is misunderstood by developers or providers, the risk of an incorrect diagnosis or inappropriate treatment regimen increases, which could raise liability risk for providers who blindly defer to the technologies or who rely on them well outside the standard of care. These vulnerabilities and the likelihood of error should be assessed and appropriately disclosed to the patient, and alternative options should be considered where appropriate. While the potential for increased liability has led some providers to resist the use of AI/ML altogether, as AI/ML becomes incorporated into the standard of care, it may be riskier for providers not to use such tools.
There are also concerns as to whether insurers are using AI/ML predictive tools appropriately for claims review, or whether they are being used as cost-saving measures to improperly deny patients coverage for medically necessary services. Class action lawsuits have been filed against two large health insurers by patients alleging that the insurers used an AI algorithm to automatically deny claims that did not meet certain preset criteria, often overriding provider determinations and recommendations. This alleged use of AI/ML to review and process claims has stoked fears that claim denials will increase and that its use in an already controversial preauthorization system could delay time-sensitive but costly treatment.
Conclusion
AI/ML in healthcare delivery has the potential to improve patient care and outcomes. Over the past few years, and especially in the wake of ChatGPT, excitement about innovation in this space has been unparalleled. However, the risks give some providers pause: AI/ML has demonstrated bias and raises concerns about patient privacy, autonomy, and the adequacy of informed consent, as well as safety and efficacy issues that could increase provider liability. Thus, as pressure to use AI/ML in healthcare delivery increases, stakeholders and regulators must continue their efforts to balance the risks and benefits while remaining focused on providing care with a human touch.