“Paging Dr. Bot” – The Emergence of AI and Machine Learning in Healthcare
Michael Woolf, Baker, Donelson, Bearman, Caldwell & Berkowitz, PC, Nashville, TN
October 01, 2017
Emerging legal issues surrounding the use of artificial intelligence and machine learning in healthcare are numerous and complex. On one hand, these new technologies offer many opportunities for significant advancements in medical research and the provision of healthcare services. On the other hand, they simultaneously create new potential pitfalls regarding liability for poor outcomes and the potential displacement of healthcare providers.
Artificial intelligence (or AI) refers to programs that allow a machine to carry out tasks in a manner that humans would consider “intelligent” or “smart.”1 Machine learning is one such application of AI: a machine is given enough data about a particular subject that it can begin to make inferences beyond the feeder (or initial) data set.2 Innovations in machine learning such as neural networks – systems able to organize information in a manner similar to humans – further learning by taking advantage of feedback loops, in which the output of one situation is used as new input for the next. Either through external direction or through internal recognition and inference as to whether its decisions are right or wrong (largely based on probability and predictive modeling), a computer can ‘learn’ to recognize images, patterns, statements, and other data. Eventually, the computer can make assertions, decisions, and extrapolations with a relatively high degree of accuracy. As new methods of interaction with these systems emerge – such as the natural language processing found in Siri, Alexa, and Cortana – AI inches ever closer to the computers of science fiction.
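As a concrete illustration of such a feedback loop, the toy Python sketch below repeatedly feeds a simple model's own errors back into its weights. The data, learning rule, and numbers are invented for illustration and are not drawn from any clinical system.

```python
import random

weights = [0.0, 0.0]
data = [((1, 0), 1), ((0, 1), 0), ((1, 1), 1), ((0, 0), 0)]  # (features, label)

for _ in range(200):  # repeated feedback rounds
    x, label = random.choice(data)
    prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0.5 else 0
    error = label - prediction            # the output becomes the feedback signal
    weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]

print(weights)  # drifts toward weights that classify the toy examples correctly
```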
Technologists widely consider the majority of algorithms currently used in the healthcare space to be “expert systems.”3 They define expert systems as those that can apply general principles of medicine to new patients in a manner similar to a medical school student.4 These systems use rule sets to draw conclusions in a clinical scenario by using data on a given topic that the system applies to a particular set of facts. Uses might include cross-referencing drug interactions or making determinations about what testing might be appropriate based on statistical analysis.
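To make the rule-based flavor of an expert system concrete, the minimal Python sketch below cross-references a medication list against a hand-written rule set. The drug pairs and warnings are illustrative placeholders, not clinical guidance; a real expert system would encode a far larger, curated rule base.

```python
# Illustrative rule set; drug pairs and warnings are placeholders, not clinical guidance.
INTERACTION_RULES = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "risk of hyperkalemia",
}

def check_interactions(medications):
    """Apply each pairwise rule to the patient's current medication list."""
    meds = {m.lower() for m in medications}
    return [(tuple(sorted(pair)), warning)
            for pair, warning in INTERACTION_RULES.items()
            if pair <= meds]  # a rule fires only if both drugs are present

print(check_interactions(["Warfarin", "Aspirin", "Metformin"]))
# [(('aspirin', 'warfarin'), 'increased bleeding risk')]
```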
By comparison, machine learning solutions approach problems in a manner more closely resembling a newly licensed medical resident as she learns new information and rules from data input over time.5 With “experience,” machine learning algorithms can look for combinations and patterns and reliably predict outcomes by analyzing large amounts of data. This process is similar to traditional regression models – where variables that matter are sorted from those that can be ignored.6 The sheer volume or complexity of creating massive sets of predictors and combining them in nonlinear, interactive ways would make analyzing such data unrealistic, perhaps even inconceivable, using pre-AI regression modeling.7
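A hedged sketch of what this looks like in practice: the Python snippet below trains an off-the-shelf model on purely synthetic data containing hundreds of candidate predictors and a nonlinear interaction, the kind of structure a hand-specified regression would struggle to capture. The data and variables are randomly generated and carry no clinical meaning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))                    # 5,000 synthetic "patients", 200 candidate predictors
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)   # outcome driven by a nonlinear interaction

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

Notably, the model is never told which of the 200 variables matter; it sorts the signal from the noise on its own, which is the advantage the text describes.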
Whether the system is an expert system or a machine learning solution, the ability to parse tremendous amounts of data quickly and efficiently has the potential to create better healthcare outcomes.8 Just how these systems will be utilized, and how much reliance clinicians will place on them over their own training and experience, remains to be seen. There is little doubt, however, that the emergence of these systems will have a lasting impact on the provision of care.
There are numerous uses for AI in healthcare. Some include disease identification and diagnosis, personalized treatment programs, pharmaceutical treatment and discovery, clinical trial research, radiology and radiotherapies, predictive modeling for epidemics and outbreaks, and electronic health records systems.9 However, with the adoption of more AI solutions, issues regarding the legality and effectiveness of these systems are open for debate.
Chief among these issues are: (1) concerns regarding civil and criminal liability for poor outcomes resulting from the use of AI to diagnose or treat an illness (or, conversely, potential claims of malpractice that could result from failure to use AI in the practice of healthcare); (2) biases emerging in AI and machine learning solutions; and (3) the likely displacement of physicians and providers by AI and machine learning solutions. How practitioners and the law address these issues will have a monumental impact on the administration of healthcare in the coming decades.
Who Dunnit – Civil and Criminal Liability for Poor Outcomes
Conventional legal questions emerging early in the adoption of AI and machine learning include:
- Who may be held liable when a machine misdiagnoses or causes an injury to a patient?10
- To what degree, if at all, may physicians delegate diagnostic tasks to AI systems without fear of exposure to increased liability?11
- To what degree, if at all, do practitioners increase their exposure to liability if they fail to use these new systems?
Damned if you do – Liability Issues
The legal effect of introducing AI into the provision of healthcare, the generation of diagnoses, or the creation of new treatments will vary widely in accordance with regulations governing the particular contexts, jurisdictions, and the rules that apply within them.12 To a certain degree, AI applications may fall within current, non-technology-specific policy, such as rules governing systems that interact with children or that concern information privacy and data protection.
But most troubling for the near future is that AI will, by design, behave in ways that its designers do not expect. This will likely give rise to a legal conundrum stemming largely from the fact that tort law expects and compensates based on foreseeable harm. The very fact that AI is designed to discover new connections and create innovative solutions in ways that may have yet to even be considered means that there might simply be no possibility of foreseeability on the part of its designer.13
Two outcomes are likely to emerge due to this conundrum. Either courts might indiscriminately or arbitrarily assign liability to the designer for reasons of fairness or efficiency, disregarding foreseeability altogether, or, in the alternative, courts might dismiss cases because the designer did not (or could not) foresee the harm that the AI caused.14 Where liability would fall in the latter case is anyone’s guess. The user of the AI? Perhaps the victim himself, through some theory of assumption of the risk?
To further complicate matters, human practitioners are required to attain certification or licensure before performing certain tasks while (at the moment) AI is not. Indeed, law and policy are already wrestling with such issues (particularly as they pertain to autonomous vehicles).15 It is unclear just who or what would be required to pass medical boards, before even reaching the issue of jurisdiction for such licensure.16
Damned if you Don’t – Risking Malpractice for Failure to Use AI Solutions
Ironically, amidst concerns about liability for the use of AI, it is equally conceivable that failure to use AI in diagnosis and therapy will soon constitute malpractice.17 As it stands, the number of deaths due to medical error is equivalent to crashing a 747 airliner every week.18 AI can and will help to decrease that number by reducing human error caused by a lack of information or experience. This is because much of the work of a physician is to assemble the picture of a disease from a collection of symptomatic puzzle pieces.
The ability of AI to sort through vast amounts of information, while remembering and crosschecking everything it has ever learned, could enable a digital (and likely more affable) version of TV’s exceptional diagnostician, Dr. Gregory House.19 House is a terribly flawed, curmudgeonly, cranky misanthrope lacking both bedside manner and, often, even professional courtesy. Yet he happens to be a diagnostic Sherlock Holmes-ian character who, as his best friend Wilson puts it, has a “Rubik’s complex” rather than the more common “Messiah complex” (that is, House is “all about the puzzle”).20 Like House, AI can parse multiple possible diagnoses as well as model and explore various issues, all the while determining why certain factors suggest ruling out a particular condition, and reach into its extensive memory to ultimately determine what the correct cause may be.21 Unlike House, AI won’t be required to berate a room full of residents in order to do so.
Simply put, AI-aided diagnostic systems are able to weigh more factors than a physician could on her own. For an AI system, a prognostic model is never restricted to the handful of variables that a human can consider.22 Because such a model can instead be built using thousands of variables, a practitioner’s failure to use these tools may soon amount to malpractice. Today, the failure to diagnose or a delay in diagnosis is the most common reason given by patients suing their doctors.23 Imagine, then, as the use of AI technology becomes ubiquitous, the case of the physician who fails to reach a proper diagnosis or prescribe the correct treatment when the answer was simply a few clicks away.
Garbage In, Garbage Out – Inherited Biases in AI
Real concerns exist about the compounding effects of current problems, especially given the new, significant possibilities for synthesis and analysis as AI parses increasingly large data sets.24 These problems may stem from sources as simple as inaccurate or incomplete medical records. They may also flow from more distasteful dynamics such as racism, sexism, or other forms of discrimination, producing biased outcomes.25 Risk of bias can even emerge from results calculated without the context of certain cultural norms.26 Unfortunately, bias can be difficult to detect and thus may unintentionally find its way into the logic of machine learning products.
For example, recent studies of AI in the legal context found that even when AI was used to attempt to counter bias in sentencing, people of color were still routinely given heavier sentences than white people.27 Researchers determined that the bias arose because the machine learning algorithms used historical information – decisions made with human bias – as a key part of the data set on which they based their own decisions.28 The result was a machine that maintained the prejudices that fed it.29 Bottom line: if the data is biased, the resulting decisions by AI will be as well – or, as the coder’s mantra goes, “garbage in, garbage out.”
The good news is that bias is an area in which a well-designed system may have a significant impact in correcting the problem. Even relatively unsophisticated AI can be programmed to track either its own or historic decisions and begin to recognize patterns to identify disproportionately weighted results within the decision-making process. Thus, a machine that finds itself making decisions adversely affecting a certain demographic, for instance, is also inherently smart enough to be able to recognize the deviation, stop, review its decisions, explain itself, and show the factors behind the outlier decisions.30
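One plausible, and deliberately simplified, form such a self-audit could take is sketched below in Python: the system logs its own decisions, groups them by a demographic attribute, and flags any group whose adverse-decision rate is disproportionately high. The grouping labels, threshold, and data are invented for illustration only.

```python
from collections import defaultdict

def audit_decisions(decisions, max_ratio=1.25):
    """decisions: iterable of (group, adverse) pairs. Flags any group whose
    adverse-decision rate exceeds the overall rate by more than max_ratio."""
    counts = defaultdict(lambda: [0, 0])              # group -> [adverse, total]
    for group, adverse in decisions:
        counts[group][0] += int(adverse)
        counts[group][1] += 1
    overall = sum(a for a, _ in counts.values()) / sum(t for _, t in counts.values())
    return {g: a / t for g, (a, t) in counts.items() if a / t > overall * max_ratio}

log = [("A", False), ("A", False), ("A", False), ("B", True), ("B", True), ("B", True)]
print(audit_decisions(log))  # {'B': 1.0} -- group B's adverse rate is disproportionately high
```

A flagged group would then trigger the review-and-explain step the text describes, rather than any automatic correction.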
Out with the Old, In with the New – Displacing the Human Practitioner
Two assumptions – both based on the notion that certain capabilities are indispensable in the delivery of professional services – are often offered as the basis for believing that physicians are immune to displacement by technology.31 The first is that computers are incapable of exercising judgment; the second, that computers are unable to be creative or empathetic.32
Research demonstrates, however, that when professional work is broken down into component parts, many required tasks are routine and process-based and do not require any call for judgment, creativity, or empathy.33 Additionally, those who argue that human creativity or empathy is required to do professional jobs simply ignore the fact that AI already outperforms professionals by combining processing power with data sets that humans are incapable of processing with similar speed or precision.34
Early indicators demonstrate that a radical shift in the foundation of professional services is already evident. For example, the WebMD® health websites already see more patient visits per month than do all of the doctors in the United States.35 Healthcare organizations are rapidly moving away from individualized solutions for each patient and instead toward the standardization of care.36 Doctors increasingly use technology, including AI-based problem solving and real-time checklists, to automate and transform the delivery of care.37 Once this knowledge and expertise is fully structured, it will be available online.38 In the spirit of the open source movement, some of these systems are already emerging at no cost to patients.39
It is worth bearing in mind, however, that AI systems are rarely designed to replicate human reasoning and thinking. Instead, they are designed to take different approaches to achieve faster, better, and cheaper outcomes.40 Despite a lack of creativity or human experience, AI computers have been repeatedly shown to best what humans can offer via the strength of past data, whether that means beating the best human competitors at games like Chess and Go, more accurately predicting likely court decisions, or more correctly modeling probable outcomes of epidemics.41
Updating the Physician
Designers create machine learning systems to synthesize new information at speeds far greater than a human can. The average physician reads only three to four hours of medical journals each month, yet all too often physicians are unable even to integrate that information into their practice.42 Practically speaking, machine learning and AI are at a significant advantage: they can synthesize new data, bring the information into the diagnostic process, and integrate it into their decision making almost immediately.43 In fact, recent studies found that in some cases, AI was better at diagnosing rare diseases than physicians.44 Ostensibly, this is because the physician may never have come across the relevant material or, if she has, may not have it readily in mind for a disease she has not often encountered.
Additionally, many common healthcare issues can be triaged with the use of relatively simple decision trees that complement simple, early AI.45 With a relatively small amount of input from a patient, AI can gather a list of symptoms, cross-check the symptoms against the patient record (including verifying any current medications or treatment regimens), and make determinations as to which symptoms are likely to require further examination.
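A minimal Python sketch of such a triage decision tree appears below. The patient record, symptom lists, and escalation rules are invented for illustration and do not represent clinical logic.

```python
# Invented record and rules, for illustration only -- not clinical logic.
PATIENT_RECORD = {"medications": ["metformin"], "conditions": ["type 2 diabetes"]}

ESCALATE = {"chest pain", "shortness of breath"}
WATCH_IF_DIABETIC = {"blurred vision", "excessive thirst"}

def triage(symptoms, record):
    symptoms = {s.lower() for s in symptoms}
    if symptoms & ESCALATE:
        return "urgent: refer for immediate examination"
    if "type 2 diabetes" in record["conditions"] and symptoms & WATCH_IF_DIABETIC:
        return "schedule follow-up: cross-checked against existing condition"
    return "self-care guidance; continue monitoring"

print(triage(["Excessive thirst", "fatigue"], PATIENT_RECORD))
# schedule follow-up: cross-checked against existing condition
```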
The best outcomes, however, are found when physicians and AI work together.46 For example, an AI system can catalog a patient’s symptoms, check them against a genome, analyze or set lab parameters, and recommend to the physician a subset of likely diagnoses to check.47 The physician can then integrate the AI’s output into her own workflow, using her own expertise and knowledge of the patient to make final decisions. Just such a system is in use, for example, at UCLA, where radiologists use a machine-learning system that was fed several thousand data points to simulate common inquiries often received during a consultation.48 The radiologists can interact with the system through a text-based chat interface, similar to instant messaging a colleague.49
Real-time Monitoring and Screening
In addition, as the internet of things (IOT) grows, AI can monitor patient information in real time and present a doctor with those patients across her practice who may be entering a condition of increased risk. Even if a practitioner were able to make sense of the raw data for any particular patient, she likely could not do so simultaneously for all of her patients. Instead, the same doctor can instruct AI to find at-risk patients based on continuous monitoring of blood pressure, sleep data, and respiratory rate changes picked up by an IOT-connected device.50
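A simplified Python sketch of this kind of panel-wide monitoring follows, with made-up thresholds standing in for whatever clinically validated limits a real system would use.

```python
# Placeholder limits -- a real system would use clinically validated thresholds.
THRESHOLDS = {"systolic_bp": 160, "resp_rate": 24, "sleep_hours": 4}

def flag_at_risk(readings):
    """readings: dicts like {"patient": "p1", "systolic_bp": 172}. Returns patients
    with at least one reading past its limit (sleep is flagged when too low)."""
    flagged = {}
    for r in readings:
        issues = [k for k, limit in THRESHOLDS.items() if k in r
                  and (r[k] < limit if k == "sleep_hours" else r[k] > limit)]
        if issues:
            flagged.setdefault(r["patient"], set()).update(issues)
    return flagged

stream = [{"patient": "p1", "systolic_bp": 172}, {"patient": "p2", "sleep_hours": 3.2}]
print(flag_at_risk(stream))  # {'p1': {'systolic_bp'}, 'p2': {'sleep_hours'}}
```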
This kind of AI is already in use monitoring interactions between medications and alerting physicians regarding medical imaging.51 Soon, expensive and intrusive testing that currently requires specialized equipment and facilities, such as sleep studies or stress tests, will be performed using a patient’s own IOT mattress or wearable device.
Move over Dr. Bob, here comes Dr. Bot
Beyond enhancing a physician’s workflow, research and analysis suggest that many medical practitioners, like many other professionals, will have their jobs supplanted by emerging technology. It is anticipated that most traditional professions (including medicine) will be disrupted very soon and that “less-expert persons, new types of experts, and high-performing systems” will replace many professionals.52
Some of the first to feel this shift will include radiologists, dermatologists, and anatomical pathologists, as these disciplines rely heavily on pattern recognition. These professionals will quickly be displaced by rapidly improving machine learning systems that can process large data sets and interpret images at rates, and with accuracy, that exceed human ability.53 Those closely following AI’s evolution forecast the disruption in these specific fields not in decades, but in years.54
Practitioners may be able to rest a bit easier knowing that they have one clear advantage: human patients. Recent polling indicates that more than two-thirds of Americans are uncomfortable (more than a third are “very uncomfortable”) with AI performing medical diagnoses.55 Almost three-quarters are uncomfortable with the idea of AI-programmed robots performing surgery (more than half are “very uncomfortable”).56
But, to lend some perspective, nearly three-quarters are also uncomfortable (more than half “very uncomfortable”) with the idea of AI flying an airplane,57 even though mechanically controlled flight systems have been in wide use for more than 100 years.58 In the end, the need for human contact may emerge as the deciding factor in the battle between artificial clinicians and human caregivers.
You’re Already Infected – the Future is Now
Questions regarding the legal implications of AI-assisted healthcare delivery are being posed much too late. AI and machine learning solutions are currently in wide use in the healthcare space. The question today is not what to do when AI emerges, but what to do now that it has emerged. Suggesting that AI and machine learning will someday make an impact on or disrupt healthcare is an admission that one is not paying attention. AI and machine learning systems are already widely used, and their adoption is increasingly accepted and welcomed.59
Healthcare organizations and those that support the healthcare ecosystem must recognize that AI and machine learning will disrupt the provision of care.60 What matters moving forward is how intentional the law and the healthcare industry are about the coming changes. Guidelines and laws must be created to address the legal implications of AI in the provision of healthcare.
Some specific regulatory tools that scholars suggest may be useful in regulating AI include temporary regulation with “sunset clauses” that define adaptable goals and enable adjustment of laws and regulations as circumstances evolve; the creation of regulatory “sandboxes” that allow innovation without the shackles of strict regulation; the creation of techniques for anticipatory rulemaking that allow adaptation to contingencies as they occur; the iterative development of common law to adapt rules to new contexts; or the development of specialist regulatory agencies.61 It is the responsibility of forward-thinking lawyers, doctors, and technologists to work together to proactively address these issues. The alternative is most likely a growing number of lawsuits that will create an ad hoc set of rules and bench law – or worse – decisions that kick the can down the road with no direction at all.
Endnotes
1. Bernard Marr, “What Is The Difference Between Artificial Intelligence And Machine Learning?” Forbes.com (Dec. 6, 2016) https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/#70f1e38b2742.
2. Id.
3. Ziad Obermeyer, MD and Ezekiel J. Emanuel, MD, PhD, “Predicting the Future – Big Data, Machine Learning, and Clinical Medicine,” N. Engl. J. Med. (Sep. 29, 2016) (available at http://catalyst.nejm.org/big-data-machine-learning-clinical-medicine/).
4. Id.
5. Id.
6. For more, see Amy Gallo, “A Refresher on Regression Analysis,” Harvard Business Review (Nov. 4, 2015), https://hbr.org/2015/11/a-refresher-on-regression-analysis.
7. Obermeyer, n.3, supra.
8. See, for example, “World first for robot eye operation,” Univ. of Oxford (Sep. 12, 2016) http://www.ox.ac.uk/news/2016-09-12-world-first-robot-eye-operation/; “The Robot Will See You Now – AI and Health Care,” Wired (Apr. 20, 2017), https://www.wired.com/video/2017/04/robots-us-the-ai-and-automation-revolution-the-robot-will-see-you-now-ai-and-health-care/; Matthew Hutson, “Self-taught artificial intelligence beats doctors at predicting heart attacks,” Science Magazine (Apr. 14, 2017) http://www.sciencemag.org/news/2017/04/self-taught-artificial-intelligence-beats-doctors-predicting-heart-attacks/.
9. See generally Daniel Faggella, “7 Applications of Machine Learning in Pharma and Medicine,” TechEmergence.com (Mar. 22, 2017), https://www.techemergence.com/applications-machine-learning-in-pharma-medicine/.
10. Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology, Vol. 29, No. 2, pp. 353-400 (Spring 2016).
11. Id.
12. Peter Stone, et al., “Artificial Intelligence and Life in 2030,” One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University (Sep. 2016), http://ai100.stanford.edu/2016-report.
13. The argument made by many who fear what AI will bring is whether a superintelligent system will find it no longer needs humans. Or as the “nightmare scenario” asks, “Why should a superintelligence keep us around?” A growing number of researchers take seriously the possibility that computers will reach levels of human intelligence within 50 years. The researchers suggest that if these computers eventually become more intelligent than we are, there are significant implications for humankind. Stephen Hawking said that when AI does surpass human intelligence “it’s likely to be either the best or worst thing ever to happen to humanity.” See, generally, Arend Hintze, “What an Artificial Intelligence Researcher Fears about AI,” Scientific American (Jul. 14, 2017) https://www.scientificamerican.com/article/what-an-artificial-intelligence-researcher-fears-about-ai/; see also “About, Preparing for the Age of Intelligent Machines,” Leverhulme Center for the Future of Intelligence, http://lcfi.ac.uk/about/.
14. Stone, n.12, supra.
15. Id.
16. Id.; see also Allain, fn. 22 at 1062 (citing several cases which suggest that AI systems used by physicians could be analogous to the same physician consulting with another physician to obtain an informal opinion).
17. “AI and the Digital Healthcare Revolution,” CXO Talk (Jan. 19, 2017) (available at https://www.cxotalk.com/episode/ai-digital-healthcare-revolution).
18. Id.
19. Sean Captain, “Paging Dr. Robot: The Coming AI Health Care Boom,” Fast Company (Jan. 8, 2016) https://www.fastcompany.com/3055256/paging-dr-robot-the-coming-ai-health-care-boom; see also Allain, fn. 24 at 1049 (“In the not so distant future, medical mysteries will crumble in the face of a more efficient, less dysfunctional Dr. House”).
20. Barbara Burnett, “Dr. Gregory House: Romantic Hero,” Blogcritics Magazine (Oct. 30, 2007) available at https://web.archive.org/web/20080213083700/http://blogcritics.org/archives/2007/10/30/114407.php.
21. Captain, n.19, supra.
22. Obermeyer, n.3, supra.
23. Rachael Rettner, “Failure to Diagnose Is No. 1 Reason for Suing Doctors,” LiveScience.com (Jul. 18, 2013, 6:30pm) https://www.livescience.com/38289-malpractice-claims-missed-diagnoses.html. See also Gordon D. Schiff, Seijeoung Kim, Richard Abrams, et al., “Diagnosing Diagnosis Errors: Lessons from a Multi-Institutional Collaborative Project,” Advances in Patient Safety: Vol. 2, 255, 256, Agency for Healthcare Research and Quality, U.S. Dept. of Health & Human Svcs. (2004) (available at https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/quality-patient-safety/patient-safety-resources/resources/advances-in-patient-safety/vol2/Schiff.pdf) (“[T]wo recent studies of malpractice claims revealed that diagnosis errors far outnumber medication errors as a cause of claims lodged (26 percent versus 12 percent in one study; 32 percent versus eight percent in another study). A Harris poll commissioned by the National Patient Safety Foundation found that one in six people had personally experienced a medical error related to misdiagnosis.”).
24. Dr Michael Guihot, Anne Matthew and Dr Nicolas Suzor, “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence,” We Robot Conference 2017 at 11 (available at http://www.werobot2017.com/wp-content/uploads/2017/03/Guihot-et-al-Nudging-Robots.pdf).
25. Id.
26. Id.
27. “Artificial Intelligence: Legal, Ethical, And Policy Challenges,” CXO Talk (Nov. 10, 2016) (available at https://www.cxotalk.com/episode/ai-legal-ethical-policy-challenges).
28. Id.
29. Id.
30. Id.
31. Richard Susskind and Daniel Susskind, “Technology Will Replace Many Doctors, Lawyers, and Other Professionals,” Harvard Business Review (Oct. 11, 2016) https://hbr.org/2016/10/robots-will-replace-doctors-lawyers-and-other-professionals.
32. Id.
33. Id.
34. Id.
35. Id.
36. Id.
37. Id.
38. Id.
39. Id.
40. Id.
41. Id.
42. “AI and the Digital Healthcare Revolution,” CXO Talk (Jan. 19, 2017) (available at https://www.cxotalk.com/episode/ai-digital-healthcare-revolution).
43. Id.
44. Id.
45. Id.
46. Id.
47. Id.
48. Society of Interventional Radiology, “Artificial intelligence virtual consultant helps deliver better patient care,” ScienceDaily (Mar. 8, 2017) http://www.sciencedaily.com/releases/2017/03/170308114842.htm; see also Marla Durben Hirsch, “UCLA Uses Chatbots As Radiology Consultants,” Hosp. & Health Networks (Mar. 30, 2017) http://www.hhnmag.com/articles/8186-ucla-uses-chatbots-as-radiology-consultants.
49. “AI and the Digital Healthcare Revolution,” n.42, supra.
50. Id.
51. Id.
52. Susskind, n.31, supra.
53. Obermeyer, n.3, supra.
54. Id.
55. Morning Consult, National Tracking Poll #170401, March-April 2017, at 61 (available at https://morningconsult.com/wp-content/uploads/2017/04/170401_crosstabs_Brands_v3_AG.pdf).
56. Id.
57. Id.
58. See U.S. Patent No. 1,368,226 (filed July 17, 1914); William Scheck, “Lawrence Sperry: Autopilot Inventor and Aviation Innovator,” Aviation History Magazine Online (Jan 1, 2006) http://www.historynet.com/lawrence-sperry-autopilot-inventor-and-aviation-innovator.htm (describing the debut of the Sperry automatic pilot on July 18, 1914, when Sperry and his mechanic climbed onto the wings of a biplane, midflight, to demonstrate the new invention); “Now – The Automatic Pilot,” Pop. Sci. Monthly, Feb. 1930, at 22 (introducing the Sperry Corporation’s first aircraft autopilot system); Sir John Charnley CB, MEng., FREng., FRIN, FRAeS, “The RAE Contribution to All-Weather Landing,” Journal of Aeronautical History, Volume 1, Paper No. 2011/1 at 13 (detailing the RAE Blind Landing Experimental Unit’s all-weather landing systems, introduced commercially as early as 1961).
59. See generally DeepMind, “Working with the NHS to build life-saving technology,” DeepMind, https://deepmind.com (using Google DeepMind artificial intelligence to improve diagnostics and treatment in healthcare); see also DeepMind, “Streams in NHS hospitals,” https://deepmind.com/applied/deepmind-health/working-nhs/how-were-helping-today/ (the UK’s National Health Service using AI to help clinicians identify and treat acute kidney injury, which is linked to more than 40,000 deaths annually at a cost of over £1 billion – greater than the annual cost of breast cancer treatment in the UK); see also Sean Captain, “Paging Dr. Robot: The Coming AI Health Care Boom,” Fast Company (Jan. 8, 2016) https://www.fastcompany.com/3055256/paging-dr-robot-the-coming-ai-health-care-boom (IBM’s Watson Health AI is currently in trials at 16 cancer clinics, including The Cleveland Clinic, Columbia University, the University of Kansas Cancer Center, and Yale Cancer Center, which aids practitioners with a decision tree to address possible diagnoses, recommended tests to explore, and possible treatment regimens, as well as studies, articles, and clinical trials for each possibility).
60. “Artificial Intelligence: Legal, Ethical, And Policy Challenges,” n.27, supra.
61. Guihot, n.24, supra.