
ABA Health eSource | November 2024

Artificial Intelligence in Healthcare Systems: Opportunities and Challenges for Clinical Providers and Legal Compliance

Elizabeth Murray

Summary

  • The rapid improvements and integration of AI in healthcare systems promise to reduce physician burnout, reduce healthcare costs, and improve outcomes.
  • Clinicians and health law attorneys must maintain compliance with regulations as new instruments receive approval.
  • Regulations may require evolution to ensure patient safety.

Introduction

Artificial intelligence (AI) in healthcare is a trendy topic; however, AI integration into medicine, patient care, clinical practice, and documentation is already familiar and steadily improving. This article serves as a reference for health law attorneys and compliance professionals to:

  • Illustrate the various ways that AI is being incorporated into healthcare systems
  • Encourage health law attorneys and compliance professionals to familiarize themselves with the available resources to ensure software and applications are authorized for use in healthcare systems
  • Spotlight the possibilities and pitfalls of AI applications in use or development for billing and coding optimization
  • Assist in determining whether an online resource or piece of information is reliable and accurate
  • Highlight a few new academic studies of interest regarding the integration of AI in healthcare

Artificial Intelligence in Practice

Until now, AI integration in electronic health records (EHR) and diagnostics has required significant provider input, such as clicking override on pop-up warnings in physician order and pharmacy workflows, a final radiology reading by a physician, or a machine interpretation of an EKG requiring review by a healthcare professional for clinical correlation. The rapid improvements and integration of AI in healthcare systems promise to reduce physician burnout, reduce healthcare costs, and improve outcomes. Compliance with regulations as new instruments receive approval is top of mind for clinicians and health law attorneys. However, regulations may need to evolve to keep patient safety a priority during implementation.

AI is found almost everywhere in healthcare systems. AI-enabled tools focused on radiology lead the industry charge, with over 100 radiology-related AI companies and over 400 radiology AI algorithms approved by the U.S. Food and Drug Administration (FDA). To date, the FDA has authorized 950 artificial intelligence and machine learning (AI/ML)-enabled medical devices. Hospital systems and solo providers use machine learning (ML)-enabled medical devices and AI-enhanced chatbots to create documentation, orders, coding, and billing. Currently, the main roles of AI in medicine are:

  • Diagnostic assistance
  • Predictive analytics
  • Personalized treatment plans
  • Clinical workflow optimization

The approval and adoption of AI/ML tools, however, has not relieved medical providers of liability for malpractice. In a recent case, In re Acclarent, 2024 WL 2873617 (Tex. App. 2024), a Texas appellate court ultimately denied the plaintiff’s request for pre-complaint depositions of the device manufacturer involved in the alleged medical negligence. Still, joint liability among AI/ML vendors and medical providers in some jurisdictions seems inevitable. Absent legislation, future complaints will likely explore the dual nature of medical negligence and product liability and include discovery requests for AI/ML vendor information in negligence cases. The wide-ranging implications of potential hybrid medical malpractice and product liability cases suggest a new “wild west” of venues, jurisdictions, and statutes of limitations.

The FDA maintains a public list of AI/ML-enabled medical devices that meet the FDA’s “applicable premarket requirements, including a focused review of the devices’ overall safety and effectiveness, which includes an evaluation of appropriate study diversity based on the device’s intended use and technological characteristics.” The FDA updates the list periodically, though it is not meant to be an exhaustive or comprehensive resource of medical devices that incorporate AI/ML. The list is a solid starting point for a compliance officer or health law attorney reviewing new software or handling a negligence case involving such products.
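Because the FDA publishes the list as a downloadable data file, a compliance team can also filter it programmatically when screening a new product. The short Python sketch below is illustrative only: the file name and column headers are assumptions and should be checked against the headers in the FDA’s actual export.

```python
# Minimal sketch: filtering a local export of the FDA's AI/ML-enabled
# device list with pandas. The file name and column headers below are
# assumptions for illustration; verify them against the real download.
import pandas as pd

devices = pd.read_csv("fda_ai_ml_devices.csv")  # hypothetical local export

# Example: keep radiology-panel devices with final decisions after 2023.
radiology = devices[
    (devices["Panel (Lead)"] == "Radiology")
    & (pd.to_datetime(devices["Date of Final Decision"]) >= "2023-01-01")
]

# A reviewer might confirm a specific product appears on the list.
print(radiology[["Submission Number", "Device", "Company"]].head())
```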

AI/ML in Provider Documentation: Billing and Coding Optimization Possibilities and Pitfalls

Billing and coding, especially for evaluation and management (E/M) physician visits, remains susceptible to fraud and abuse. The complexity of E/M coding, which is based on documentation elements, can lead to incorrect or even fraudulent coding. AI/ML applications seek to automate the process. However, nearly all billing and coding experts agree that any implementation of artificial intelligence in coding should keep a human coder in charge of the final result.

Epic, one of the best-known electronic medical record systems, is an industry leader in developing AI technology to improve documentation and optimize billing and coding. Epic’s EHR can use AI to generate progress notes from patient/provider visits using ambient listening technology in the exam room. In other words, the AI drafts a progress note using the information gleaned from the provider-patient conversation. Patient consent is necessary, and the application works similarly to dictation, but at a much higher level. A physician must review the note later and finalize the documentation.

Copying and pasting previous visit information into medical record documentation represents the low-tech version of generative AI in medical documentation; it can save time but risks documenting old or obsolete information. In cases of systematic fraudulent charting, copying and pasting large amounts of prior documentation also serves to populate sections of a progress note incorrectly, “checking the boxes” on the multiple levels of documentation needed to bill a complex visit. Epic has new documentation features that summarize prior notes and require a provider to choose which elements apply to the current visit and discard the others. Providers essentially “train” the AI to discern which elements belong. According to the developer, the approach is promising for reducing provider stress and time spent on after-visit charting, provided the implementation includes significant auditing and monitoring. Without such surveillance, the risk of systematic fraudulent coding and billing remains.
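One way a compliance team could operationalize that auditing is to record, for each note, how many AI-suggested elements the provider kept or discarded. The Python sketch below is a hypothetical illustration of such an audit record, not Epic’s or any vendor’s actual implementation.

```python
# Hypothetical audit record for AI-assisted documentation: log how many
# AI-suggested elements a provider kept, so auditors can spot providers
# who accept every suggestion, every time, without review.
import json
from datetime import datetime, timezone

def log_note_audit(note_id: str, provider_id: str,
                   suggested: list[str], kept: list[str]) -> dict:
    record = {
        "note_id": note_id,
        "provider_id": provider_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "suggested_count": len(suggested),
        "kept_count": len(kept),
        "accept_rate": len(kept) / len(suggested) if suggested else 0.0,
    }
    print(json.dumps(record))  # in practice, write to a durable audit log
    return record

# Example: a consistently perfect accept rate may merit auditor attention.
log_note_audit("note-001", "provider-42",
               suggested=["HPI", "ROS", "PMH", "Exam"],
               kept=["HPI", "ROS", "PMH", "Exam"])
```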

AI technology integrated specifically into medical coding is still in development. It seeks to alleviate workflow barriers for human coders by suggesting likely procedure and diagnosis codes from the documentation. The machine learning component could help prevent fraudulent billing, although it can still be exploited, whether through error or fraud.
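To make the human-in-the-loop requirement concrete, the Python sketch below shows one way such a tool could be structured: the model only proposes codes, and every proposal is routed to a human coder, with low-confidence suggestions flagged. All function names, codes, and confidence values here are hypothetical, not any vendor’s actual interface.

```python
# Hypothetical human-in-the-loop coding workflow: the model suggests
# codes, but nothing is billed until a human coder reviews each one.
from dataclasses import dataclass

@dataclass
class CodeSuggestion:
    code: str          # e.g., a CPT or ICD-10 code
    confidence: float  # model confidence, 0.0 to 1.0
    rationale: str     # documentation text the model relied on

def suggest_codes(note_text: str) -> list[CodeSuggestion]:
    """Stand-in for a vendor model; returns canned examples here."""
    return [
        CodeSuggestion("99214", 0.94, "established patient, moderate complexity"),
        CodeSuggestion("I10", 0.71, "hypertension mentioned in assessment"),
    ]

def triage_for_review(note_text: str, threshold: float = 0.90) -> None:
    """Route every suggestion to a human coder; flag low-confidence ones."""
    for s in suggest_codes(note_text):
        flag = "FLAG: verify against chart" if s.confidence < threshold else "routine review"
        print(f"{s.code} ({s.confidence:.0%}): {s.rationale} -> {flag}")

triage_for_review("...progress note text...")
```

The design choice matters for compliance: because the model never writes codes directly to billing, the human coder remains the accountable decision-maker that billing and coding experts call for.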

Accuracy and Legitimacy of Online AI/ML Information

New AI/ML applications, research, and data are readily available, though practitioners may find it challenging to vet their accuracy and reliability. When researching an AI application in healthcare, recognize the difference between an article written and published by a reputable source, such as a medical journal or vetted news outlet; content sponsored by a vendor; and pure AI-generated clickbait. The lines blur between these three categories, and there may be valuable and honest information in all three, but knowing the difference is very helpful. Reputable sources abound for keeping current in this quickly evolving space.

Medical Research

Medical research has historically lagged behind emerging AI technology. In recent years, more information has come from the information technology (IT) sector than from providers and medical researchers. Clinicians and risk/compliance professionals may find it difficult to trust IT sources and to translate them into clinical practice. Peer-reviewed medical sources are increasingly providing curated content for medical professionals. The Journal of the American Medical Association (JAMA) and The New England Journal of Medicine (NEJM) have websites dedicated to curated research and peer-reviewed articles about AI in medicine.

Sponsored Content

The next category, sponsored content, can be a beneficial source of information published by companies with AI products in healthcare. Especially in healthcare AI, for-profit industry and private equity-backed ventures are the primary innovators. The quality of information varies with a company’s investment in substantive technical writing versus pure marketing material. Sponsored content can look like independent journalism or clinical research but should be marked “sponsored content,” even if only in fine print. Search engines usually reserve the first few results for sponsored content, so results may be clearly marked as such before a user even clicks. Links within an article to companies or products also betray sponsored content. While sponsored content can include important product information, research, and links or references to peer-reviewed articles, the reader should remain aware of the sponsorship.

AI-Generated Clickbait

Because AI in healthcare is such an evolving and popular topic, AI-generated content abounds on the internet, posing as legitimate “articles.” These pieces are composed using AI tools rather than by human authors. One can still find nuggets of accurate or useful information, although they are usually gleaned, without references, from more legitimate sources. Finding the source information may be possible by copying and pasting an item of interest, such as a full sentence from an AI-generated article, into a search engine. Relatedly, many people now pose their questions to an AI chatbot, such as ChatGPT, instead of using a traditional search engine. Asking an AI chatbot about a particular medication or side effect, for instance, can return fast results drawn from reputable sources; however, such tools need refinement before the results can be considered reliable.

New Developments in Academic Research on AI/ML in Healthcare Systems

Having been nearly absent for years, academic papers on AI/ML applications in use in healthcare systems now appear seemingly every day. A recent editorial in The New England Journal of Medicine called for more randomized clinical trials of AI. In the October 2024 article “We Need More Randomized Clinical Trials of AI,” the authors discussed the first prospective clinical trial of AI in stress echocardiography, which found no difference in diagnostic accuracy between AI assistance and a human standard-of-care assessment. The journal pointed out the “significant value in conducting prospective clinical trials of AI, and… lessons on implementation to be learned from this study.” A few other recently published results include:

  • “Perspectives on Artificial Intelligence–Generated Responses to Patient Messages”: This was the first-ever study to assess clinician satisfaction with AI-generated responses to patient questions posted to their EHRs. In this cross-sectional study, six licensed clinicians evaluated responses to 3,769,023 patient medical advice requests in EHRs that had been created using one of two generative AI tools. The clinicians then evaluated the AI-generated responses against the original clinician responses for information quality and empathy. Satisfaction was higher with the AI responses than with the clinicians’ responses. The highest information satisfaction came from AI-generated cardiology responses, while quality and empathy were rated highest for the endocrinology responses. The study found that the clinician-generated responses were shorter than the AI responses, and response length was also associated with satisfaction.
  • “Accelerated Chest Pain Treatment with Artificial Intelligence-Informed, Risk-Driven Triage”: This multisite quality improvement study compared treatment intervals for adult patients with chest pain before and after implementation of an AI-informed emergency department (ED) triage system. The results showed that the triage system did not change the median length of stay (LOS) for discharged patients but did reduce it for hospitalized patients. The system also reduced patients’ adjusted median time to emergency cardiovascular procedures but did not change 30-day mortality or 72-hour ED returns requiring hospitalization.
  • “AI-Based Anomaly Detection (AD) for Clinical-Grade Histopathological Diagnostics”: The first effective clinical application of AI-based AD in histopathology, this study found that, without specific training for the diseases, the AI tool’s best-performing model reliably detected a broad spectrum of infrequent pathologies with 95% accuracy for the stomach and 91% for the colon. Cancers were detected at rates of 97.7% for the stomach and 96.9% for the colon. The study authors concluded, “To our knowledge, no other published AI tool is capable of zero-shot pan-cancer detection. AD may enhance the safety of AI models in histopathology, thereby driving AI adoption and automation in routine diagnostics and beyond.”

This breakthrough clinical research on AI/ML in healthcare systems demonstrates that the industry is on the leading edge of AI applications and reinforces the importance for compliance professionals and health lawyers of staying apprised of the newest applications and verifying that devices are ethical, useful, and approved for use.
