October 02, 2018 Feature

Realizing the Potential for AI in Precision Health

By Tom Lawry, Steve Mutkoski, and Nathan Leong

Artificial intelligence (AI)—intelligent technology capable of analyzing, learning, and drawing predictive insights from data—is transforming many sectors of the global economy; whether AI also will transform health care is no longer a matter for significant debate.1 Immensely powerful AI capabilities have driven major advances in “precision health” technologies—personalized and predictive health solutions that help prevent and treat disease and promote wellness—by enabling the analysis of, and extraction of insight from, increasingly large amounts of health data. But with the advent of these solutions, the question has become: How do we respond to the ethical and legal challenges they can create?

The Promise of Precision Health Today

Precision medicine technologies, which consider an individual’s genetic makeup and potentially other unique factors, such as lifestyle and environment, have long been used in medical practice to determine the appropriate medical treatment.2 But the real public health opportunity lies in precision health. Precision health involves not just identifying the right treatment for a particular patient but also individualizing all aspects of health care for that patient, including disease risk and prognosis prediction, leading to better disease detection and prevention.3 Rather than simply the right treatment at the right time for the right patient, precision health looks to ensure the right medical decision for each individual patient.

AI-driven precision health technologies hold great promise, including for improving quality of care and patient outcomes. While we may be years or decades away from realizing the full potential of precision health, early efforts already are solving important challenges and demonstrate the critical role of AI technology and advances in computing capabilities.

For example, Microsoft partnered with St. Jude Children’s Research Hospital and DNAnexus to develop a genomics platform that provides a database enabling researchers to identify how genomes differ. Researchers can inspect the data by disease, publication, and gene mutation and also upload and test their own data using the platform’s bioinformatics tools. Because the data and analysis run in the cloud, powered by rapid computing capabilities, researchers can advance their projects much faster and more cost-efficiently, without needing to download the data.

In another example, Adaptive Biotechnologies partnered with Microsoft to build AI technology to map and decode the immune system, much as the human genome has been decoded. Such a map could reveal which diseases the body currently is fighting or has ever fought, enabling earlier and more accurate diagnosis of disease and a better understanding of overall human health.4

Microsoft also is collaborating with researchers to use machine learning and natural language processing to convert text (e.g., journal publications) into structured databases to help identify the most effective, individualized cancer treatment for patients. Without this type of technology, it takes hours for a molecular tumor board of many specialists to review one patient’s genomics and other data to make treatment decisions.5

While there is great promise in AI-driven precision health systems, there also are many challenges to their successful development and adoption, including technological and economic challenges. For example, as medicine becomes more personalized, the market for any particular therapeutic product becomes smaller, potentially leading to higher-priced diagnostic tests and “personalized” medicines that disadvantage economically less well-off patients.6 Likewise, payers may be reluctant to pay for early risk prediction technologies when it is not clear which of those tests actually improve outcomes or reduce costs in the long term.7 These issues and others raise particularly thorny legal and ethical challenges to the use of AI for precision health.

Principles for Responsible Development

Society will achieve the public health promise of AI-driven precision health only if these systems are developed and deployed responsibly. In the book The Future Computed, with a foreword co-authored by Brad Smith and Harry Shum, Microsoft has proposed a series of principles to guide the responsible creation and deployment of AI and the development of best practices and regulatory frameworks for the use of AI that merit consideration in the precision health context.8

Many of the guiding principles identified by Smith and Shum are not new to the health care sector: reliability and safety, “representativeness” and lack of bias, and privacy have long been principles of medical ethics. In some ways, therefore, the medical community may be better prepared than other sectors to implement these principles for the use of AI. But the medical community has by no means solved the challenges addressed by these principles even for traditional medical technologies, and the rise of AI-driven precision health systems adds new and different dimensions to these challenges.

Reliability and Safety

Precision health technologies must perform accurately, reliably, and safely. While medical technologies long have been required to demonstrate safety and effectiveness to regulators such as the U.S. Food and Drug Administration (FDA),9 AI-driven systems challenge the traditional models for demonstrating reliability.

One particular challenge for precision health technologies is ensuring that the output is not just technically reliable (that the correlation or rule an AI system learns accurately reflects the data) but also clinically reliable. AI systems can analyze vast datasets to identify correlations, but that does not mean that every correlation or rule an AI system learns is clinically relevant, that the system has properly characterized cause and effect, or that system developers have identified the correct variables.10 AI systems cannot exercise clinical reasoning and judgment to determine whether the rules learned by a system are clinically meaningful. Instead, this kind of judgment must be exercised by health care professionals. Thus, particularly as applied in precision health, AI may be better thought of as “augmented intelligence” rather than “artificial intelligence”—amplifying human intelligence rather than replacing it.11

A well-known example illustrates the need to critically evaluate the rules learned by AI systems. Researchers designed an AI system to help triage patients as low or high risk for pneumonia: low-risk patients could be sent home with antibiotics, while high-risk patients would be admitted to the hospital. The system identified a correlation showing that asthma sufferers were less likely to die from pneumonia than the general population and therefore recommended against treating asthma sufferers as high risk. The correlation was real, but the conclusion that asthma sufferers were low risk misinterpreted it and contradicted clinical knowledge. Because the medical community generally considered asthma patients to be at greater risk of dying from pneumonia, asthma patients received faster and more comprehensive care than other patients; it was these less severe outcomes in the asthmatic population that were represented in the system’s training dataset. The system wrongly assumed that all patient outcomes reflected a patient’s underlying “risk” and could not differentiate outcomes based on the varying levels of care patients received.12
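
The confounding at work in this example is easy to reproduce. The following sketch (in Python, on synthetic data; the prevalence, care-policy, and risk numbers are invented for illustration and are not drawn from the study) shows how a naive triage model that cannot see the care a patient received learns a negative, seemingly protective, weight for asthma:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
asthma = rng.binomial(1, 0.15, n)  # 15% of patients have asthma (invented)
# Hypothetical care policy: asthmatic pneumonia patients are always escalated
# to intensive care; other patients are escalated 30% of the time.
intensive_care = np.where(asthma == 1, 1, rng.binomial(1, 0.3, n))
# True underlying process: asthma RAISES mortality risk; intensive care LOWERS it.
logit = -2.0 + 1.0 * asthma - 2.5 * intensive_care
death = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# A naive triage model sees only the asthma flag, not the care delivered.
model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print(f"learned asthma coefficient: {model.coef_[0, 0]:+.2f}")  # negative

The learned coefficient comes out negative: the model concludes that asthma is protective, which is technically faithful to the data yet clinically wrong, because the model cannot distinguish underlying risk from the effect of the extra care asthmatic patients received.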

“Self-improving” or “continuous learning” AI is one example of promising AI-driven precision health technology that puts in sharp relief the challenge of ensuring reliability and safety within the existing regulatory models. As many have observed, our current regulatory regime assumes that “any product may be clinically tested, produced, marketed, and used in a defined, unchanging form.”13 That assumption is clearly in tension with the concept of a tool that continually learns by analyzing new data, which under the current regime would suppose constant resubmission for regulatory review, whether that learning is supervised by a human or not.14

The dynamic nature of continuous learning AI means we will need to develop new ways to ensure the safety and reliability of such systems. We will need a regulatory regime that ensures that the changes a continuous learning system makes to itself, ostensibly improvements, do not instead introduce errors into the model that could injure subsequent patients. But at the same time, that new regime must be nimble enough not to require nearly constant revalidation of the device or its model. A regime to ensure the reliability of AI systems should involve the following: systematic evaluation of the quality and suitability of the data and models used to train AI-driven systems; adequate explanation of system operation, including disclosure of potential limitations or inadequacies in the training data; medical specialist involvement in the design and operation process; evaluation of the role of medical professional input and control in the deployment of the systems; and a robust feedback mechanism from users to developers.

Fairness, Inclusiveness, and Bias

Precision health systems should treat everyone in a fair and balanced manner. In theory, AI-driven systems make objective decisions and do not have the same subjective biases that influence human decision making. But, in practice, AI systems are subject to many of the same biases.

Because AI-driven systems are trained using data that reflect our imperfect world, without proper awareness and control, those systems can amplify biases and unfairness that already exist within datasets—or can “learn” biases through their processing. “Under-representation” in datasets may hide population differences in disease risk or treatment efficacy. For example, researchers recently found that cardiomyopathy genetic tests were better able to identify pathogenic variants in white patients than in patients of other ethnicities, who had higher rates of inconclusive results or variants of uncertain significance.15 Even data that are representative can still encode bias because they reflect the discrepancies and biases of our society, such as racial, geographic, or economic disparities in access to health care.

Nonrepresentative collection of data also can produce bias. For example, reliance on data collected through user-facing apps and wearables may skew toward socioeconomically advantaged populations with greater access to connected devices and cloud services. Similarly, genetic testing remains cost-prohibitive for many consumers, so AI systems that leverage such genetic datasets may be skewed toward more economically advantaged consumers. And data obtained from electronic health records (EHRs) will reflect disparities in the patient populations treated by health systems implementing EHR systems; the uninsured or underinsured and those without consistent access to quality health care (such as some patients in rural areas) often will be underrepresented in EHR datasets.16 EHR data themselves may introduce bias because they were collected for clinical, administrative, and financial purposes (patient care and billing) rather than for research and, therefore, may be missing critical clinical contextual information.17
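
One practical response to these data skews is to audit a system’s performance subgroup by subgroup rather than relying on a single aggregate number, so that weaker performance on an under-represented group is surfaced instead of being averaged away. A minimal sketch of such an audit follows (in Python; the labels, predictions, and group names are hypothetical):

import numpy as np

def sensitivity_by_group(y_true, y_pred, groups):
    """True-positive rate (sensitivity) computed separately per subgroup."""
    results = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            results[g] = float("nan")  # no positive cases: flag it, don't hide it
        else:
            results[g] = float((y_pred[positives] == 1).mean())
    return results

# Hypothetical evaluation data: true labels, model predictions, subgroup.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g, tpr in sensitivity_by_group(y_true, y_pred, group).items():
    print(f"group {g}: sensitivity = {tpr:.2f}")  # A: 0.67, B: 0.50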

We also must ensure fairness in application. Precision health technologies that are designed to predict health outcomes to improve quality of care could be unfairly implemented to make decisions about who receives care in an effort to reduce costs. For example, researchers developed a machine learning system to help predict six- to twelve-month mortality risk.18 The system was designed to improve upon physicians’ current prognostic efforts when determining whether a patient is eligible for hospice care and to improve end-of-life care. But such a system could be unfair if deployed to withhold treatment from patients with a higher mortality risk.

AI systems also may reflect the biases of those developing the systems and of the clinicians implementing and interpreting them. This makes it particularly important that precision health technologies are developed by diverse groups of individuals and teams, including appropriate medical experts. In addition, the health care professionals who implement precision health technologies in their practice must continue to exercise their own professional judgment in making patient care and treatment decisions. Finally, data scientists and AI developers must continue to develop analytical techniques to detect and address unfairness in AI-driven technologies.

Transparency and Accountability

Underlying the principles of reliability, fairness, and security are two fundamental principles: transparency and accountability. Because decisions made by precision health systems will impact patients’ health and care, it is particularly important that everyone relying on these systems (health care professionals, patients, managed care organizations, regulators) understand how the systems make decisions. Equally important, as precision health systems play a greater role in both diagnosis and selection of treatment options by health care professionals, we will need to work through existing rules around accountability, including liability.

As a threshold matter, these systems should provide “holistic” explanations that include contextual information about how the system works and interacts with data, enabling the medical community to identify and raise awareness of potential bias, errors, and other unintended outcomes. Precision health systems may create unfairness if health care professionals do not understand a system’s limitations (including its accuracy) or misunderstand the role of the system’s output.

Even if it is difficult for users to understand all the nuances of how a particular algorithm functions, health care professionals must be able to understand the clinical basis for recommendations generated by AI systems. As discussed above, even where the results of AI systems are technically reliable, they may not always be clinically relevant to a particular patient, and health care professionals will need to continue to exercise their judgment in distinguishing between the two. Transparency is thus not just a matter of how an AI system explains its results; it also means teaching health care providers and users how to interrogate those results—ensuring doctors and others relying on precision health systems understand the systems’ limitations and do not place undue reliance on them. Recent court cases involving the use of algorithms by state officials to assess and revise benefits for citizens with developmental and intellectual disabilities under a state Medicaid program provide a glimpse of how accountability issues will arise and be adjudicated. In these cases, courts required the states to provide patients with information about how the algorithms were created so that patients could challenge their individual benefit allocations.19
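
What such interrogation can look like in practice is suggested by interpretable models, whose recommendations decompose into per-feature contributions a clinician can inspect. A minimal sketch (in Python; the feature names and weights are invented for illustration, not taken from any deployed system):

import numpy as np

# Hypothetical features and learned weights of a linear pneumonia-risk model.
features = ["age_over_65", "asthma", "low_blood_pressure"]
weights = np.array([0.9, -0.6, 1.4])  # illustrative coefficients
bias = -2.0

patient = np.array([1, 1, 0])  # this patient's feature values

contributions = weights * patient  # per-feature contribution to the score
risk = 1.0 / (1.0 + np.exp(-(bias + contributions.sum())))

print(f"predicted risk: {risk:.2f}")
for name, c in zip(features, contributions):
    print(f"  {name:>20}: {c:+.2f}")
# A clinician who sees "asthma: -0.60" can recognize the suspect protective
# effect discussed above and question, or override, the recommendation.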

Beyond transparency, developers of AI-driven precision health systems should have some degree of accountability for how the systems operate, and those that deploy the systems in medical practice should exercise appropriate judgment when integrating them into medical decision making. At this point, there remain more questions about how accountability should be addressed than there are answers. For example, does it make sense simply to extend existing tort liability regimes that are designed to address injuries arising from defective products or negligent medical practice to also include injuries arising from the deployment and use of precision health technologies? And how should the balance of responsibility for use of suggestions provided by AI-driven precision health systems fall between system developers, health care institutions implementing the systems, and health care professionals utilizing the systems in clinical decision making? Are health care institutions required to independently evaluate each system, and if so, how?

Privacy and Security

AI-driven precision health systems should be secure and respect privacy. These systems will require unprecedented access to sensitive personal health data by technology developers and others, including researchers and clinicians. Protection and security of those data are critical to ensuring patients are willing to share their data and permit their use in innovation.

Privacy of health data, in particular, is already the subject of data protection laws. For example, in the United States, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) governs the use and disclosure of personally identifiable health information by health care providers and health plans and their business associates. In addition, many states have implemented genetic privacy laws that govern consent for genetic testing and disclosure of genetic testing results. But these existing frameworks may not adequately address the myriad privacy concerns that come with the explosion of health data and the many ways in which health data are collected, stored, shared, and used in the development of precision health systems.

Equally important is ensuring patients understand both when those protections are in place and the full ramifications of voluntarily sharing data, whether among various formal research projects or through more “open source” medical, genetic, or genealogical projects such as GEDmatch (a genetic database to which individuals voluntarily contribute their genetic data). News that law enforcement authorities identified and captured the “Golden State Killer” using GEDmatch has shed a bright light on the fact that genetic information is highly identifying. In the weeks since that announcement, law enforcement officials around the country have begun using these data to find the perpetrators of many additional unsolved crimes.20 Not only does a person’s DNA identify him or her, but it also facilitates identification of even distantly related individuals, possibilities that one commentator said “read like science fiction, whether you find them hopeful or horrifying.”21 Concerns regarding the confidentiality of genetic databases are not new. In December 2016, Congress required that federally funded research involving genetic data be covered by “Certificates of Confidentiality” protecting the privacy of research subjects, including from disclosure in court proceedings.22 However, it remains to be seen whether these Certificates of Confidentiality, or other legislation that might be formulated, will fully shield research data from disclosure or use by law enforcement.

We also must balance privacy protection with facilitating access to the data that AI-driven precision health systems require to operate effectively. This requires the development and implementation of security methods, such as differential privacy, homomorphic encryption, and techniques to separate data from information identifying individuals (de-identification being a particular challenge for genetic data). Developers of AI systems must continue to invest in the development of privacy protection and data security technologies that can be deployed with those systems.
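
To make one of these techniques concrete, the following is a minimal sketch of differential privacy via the Laplace mechanism, releasing an aggregate count from patient-level data with calibrated noise (in Python; the cohort and the epsilon value are illustrative, and a production deployment would also need careful privacy-budget accounting):

import numpy as np

def dp_count(values, epsilon, rng=None):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    rng = rng if rng is not None else np.random.default_rng()
    true_count = float(np.sum(values))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical cohort: 1 = patient carries the variant of interest.
cohort = np.random.default_rng(0).binomial(1, 0.1, size=5_000)
print(f"true count: {cohort.sum()}, private release: {dp_count(cohort, 0.5):.1f}")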

Preparing the Health Care Profession

The development of AI-driven precision health technologies has given rise to questions about whether these systems will replace doctors. Such fears are likely unfounded. Most, if not all, countries are experiencing severe clinician shortages, and in the United States these shortages are predicted only to worsen over the next ten years. For example, a report for the Association of American Medical Colleges predicts a shortage of physicians through 2030 under every combination of scenarios modeled.23 Rather than being a threat to clinicians, AI-infused precision health tools might well be essential to improving the efficiency of care, thereby mitigating some of the issues resulting from future shortages of trained and experienced clinicians.

The promise of precision health systems to improve care likely will come not from replacing clinicians but from automating repetitive tasks, thereby freeing clinicians’ time to focus on high-value activities in the patient care and treatment process. In this regard, properly designed systems will focus on augmenting the skills and experience of highly trained clinicians, in keeping with the natural workflows of clinical delivery processes.

Conclusion

AI-driven precision health technologies hold great promise. We already see applications today that advance our understanding of health and disease, improve patient care and the public health, and reduce health care costs. But these systems must be carefully developed and deployed to ensure they are reliable, fair, transparent, private, and secure. Systems that are not could do more harm than good. Equally critical for precision health is the continued need for clinicians to exercise medical judgment, both in the development and in the application of AI-driven systems that augment health care practice. Only when we realize the importance of, and work together on, all these critical issues will we all truly realize precision health’s potential. 

Endnotes

1. See, e.g., McKinsey Global Inst., Artificial Intelligence: The Next Digital Frontier 58–64 (June 2017).

2. The 1998 approval of Herceptin to treat HER2-positive metastatic breast cancer with a companion diagnostic to identify HER-2 positive breast cancer patients often is cited as one of the earliest success stories of personalized medicine. See, e.g., Amalia M. Issa, Personalized Medicine and the Practice of Medicine in the 21st Century, 10 McGill J. Med. 53 (2007).

3. See, e.g., Ken Redekop & Deirdre Mladsi, The Faces of Personalized Medicine: A Framework for Understanding Its Meaning and Scope, 16 Value in Health S4 (2013); Sanjiv Sam Gambhir et al., Toward Achieving Precision Health, 10 Sci. Translational Med., no. 430 (2018).

4. Peter Lee, Microsoft and Adaptive Biotechnologies Announce Partnership Using AI to Decode Immune System; Diagnose, Treat Disease, Microsoft (Jan. 4, 2018), https://blogs.microsoft.com/blog/2018/01/04/microsoft-adaptive-biotechnologies-announce-partnership-using-ai-decode-immune-system-diagnose-treat-disease/.

5. AI for Precision Medicine, Project Hanover, http://hanover.azurewebsites.net/.

6. A.D. Stern, B.M. Alexander & A. Chandra, How Economics Can Shape Precision Medicines, 355 Sci. 1131 (2017).

7. Jerel Davis, Philip Ma & Saumya Sutaria, The Microeconomics of Personalized Medicine, McKinsey & Company (Feb. 2010), https://www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/the-microeconomics-of-personalized-medicine.

8. Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (2018), https://blogs.microsoft.com/uploads/2018/02/The-Future-Computed_2.8.18.pdf. Brad Smith is Microsoft’s president and chief legal officer, and Harry Shum is Executive Vice President of Microsoft AI and Research.

9. To date, FDA has authorized marketing of only a few AI-based medical devices and has done so through its traditional approach and without AI-specific guidelines. FDA cleared the first AI-driven medical device in 2017. Letter from Robert Ochs, Dep’t of Health & Human Servs., to Arterys Inc., K163253 510(k) Summary (Jan. 5, 2017), https://www.accessdata.fda.gov/cdrh_docs/pdf16/K163253.pdf. That device applies deep learning to provide “editable, automated contours” of cardiac ventricles from multiple MRI scans of the heart to calculate blood volume. FDA also classified two AI-based devices earlier this year: triage software that uses an AI algorithm to analyze CT images and identify a potential stroke, and software that uses an AI algorithm to analyze retinal images to detect more than mild diabetic retinopathy during routine eye exams. Letter from Angela C. Krueger, U.S. Food & Drug Admin., to IDx, LLC, DEN180001 Classification Order (Apr. 11, 2018), https://www.accessdata.fda.gov/cdrh_docs/pdf18/DEN180001.pdf; Letter from Angela C. Krueger, U.S. Food & Drug Admin., to Viz.AI, Inc., DEN170073 Classification Order (Feb. 13, 2018), https://www.accessdata.fda.gov/cdrh_docs/pdf17/DEN170073.pdf. But FDA has recognized that it lacks strong internal expertise in evaluating AI-based medical technologies and is working with experts to determine how AI-based technologies can be validated and demonstrated to be reliable. Mike Miliard, FDA Chief Sees Big Things for AI in Healthcare, Healthcare IT News (Apr. 30, 2018), http://www.healthcareitnews.com/news/fda-chief-sees-big-things-ai-healthcare. Additionally, not all precision health applications of AI require FDA review.

10. Rich Caruana et al., Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 21 Proceedings of the ACM SIGKDD Int’l Conference on Knowledge Discovery & Data Mining 1721 (2015), https://www.microsoft.com/en-us/research/publication/intelligible-models-healthcare-predicting-pneumonia-risk-hospital-30-day-readmission/.

11. See, e.g., Am. Med. Ass’n, Report of Board of Trustees, B of T Report 41-A-18 (June 2018), https://www.ama-assn.org/sites/default/files/media-browser/public/hod/a18-bot41.pdf.

12. Cliff Kuang, Can A.I. Be Taught to Explain Itself?, N.Y. Times Mag. (Nov. 21, 2017), https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html.

13. Jonathan Kay, How Do You Regulate a Self-Improving Algorithm?, The Atl. (Oct. 25, 2017), https://www.theatlantic.com/technology/archive/2017/10/algorithms-future-of-health-care/543825/.

14. Omar Ford, Is FDA Warming Up to AI?, MD+DI (Feb. 14, 2018), https://www.mddionline.com/stroke-application-nod-shows-fda-growing-comfortable-ai.

15. Latrice G. Landry & Heidi L. Rehm, Association of Racial/Ethnic Categories with the Ability of Genetic Tests to Detect a Cause of Cardiomyopathy, 3 JAMA Cardiol. 341 (2018).

16. See Monica Heger, Experts Discuss Need for Diversity in Precision Medicines, Potential for Bias in AI, Genomeweb (Mar. 16, 2018), https://www.genomeweb.com/informatics/experts-discuss-need-diversity-precision-medicine-potential-bias-ai#.Wwk2sEgvxPY.

17. See Rimma Pivovarov et al., Identifying and Mitigating Biases in EHR Laboratory Tests, 51 J. Biomed. Info. 24 (2014); Travers Ching et al., Opportunities and Obstacles for Deep Learning in Biology and Medicine, 15 J. Royal Soc’y Interface 20170387 (Apr. 2018).

18. Muhammad Ahmad et al., Death vs. Data Science: Predicting End of Life, KenSci, Inc., https://app.leadsius.com/viewer/lp?l=RnVJYUpxYz0=; see also Anand Avati et al., Improving Palliative Care with Deep Learning, 2017 IEEE Int’l Conference on Bioinformatics & Biomedicine (Nov. 2017), https://arxiv.org/pdf/1711.06402.pdf.

19. Jay Stanley, Pitfalls of Artificial Intelligence Decisionmaking Highlighted in Idaho ACLU Case, ACLU (June 2, 2017), https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.

20. Antonio Regalado, A DNA Detective Has Used Genealogy to Point Police to Three More Suspected Murderers, MIT Tech. Rev. (June 26, 2018), https://www.technologyreview.com/the-download/611548/a-dna-detective-has-used-genealogy-to-point-police-to-three-more-suspected/.

21. Avi Selk, The Ingenious and “Dystopian” DNA Technique Police Used to Hunt the “Golden State Killer” Suspect, Wash. Post, (Apr. 28, 2018), https://www.washingtonpost.com/news/true-crime/wp/2018/04/27/golden-state-killer-dna-website-gedmatch-was-used-to-identify-joseph-deangelo-as-suspect-police-say/?noredirect=on&utm_term=.ec5cff900300.

22. 21st Century Cures Act, Pub. L. No. 114-255, § 2012, 130 Stat. 1049 (Dec. 2016).

23. IHS Markit Ltd., The Complexities of Physician Supply and Demand: Projections from 2016 to 2030; 2018 Update (Mar. 2018), https://aamc-black.global.ssl.fastly.net/production/media/filer_public/85/d7/85d7b689-f417-4ef0-97fb-ecc129836829/aamc_2018_workforce_projections_update_april_11_2018.pdf.


Tom Lawry ([email protected]) is Director, Microsoft Worldwide Health, Analytics & AI. Steve Mutkoski ([email protected]) is Government Affairs Director at Microsoft Worldwide Health. Nathan Leong ([email protected]) is Lead Counsel, Microsoft U.S. Health & Life Sciences.