November 01, 2017

AI and Medicine: How Fast Will Adaptation Occur?

By Matthew Henshon

Artificial intelligence (AI) burst onto the popular scene in 2011, when IBM’s Watson defeated two human champions (including all-time leader Ken Jennings) in a nationally televised two-part exhibition of Jeopardy!, the TV game show.1 An earlier IBM machine, Deep Blue, had defeated then world champion Garry Kasparov in chess in 1997, but chess perhaps seemed a simpler task for machines: a defined board, with 16 pieces on each side.2 In contrast, the range of Jeopardy! clues (remember, as Alex Trebek reminds viewers regularly, to “phrase your response in the form of a question”) is seemingly limitless, and the clues are often in the form of puns or slang; a chess move like “rook to D1” seems simple in comparison.

To its credit, IBM has built an entire marketing campaign around Watson’s victory on Jeopardy! But the methodology that enabled Watson to excel at the quiz show was perhaps unique: the machine first tried to identify a keyword in the clue, then compared that word against its (then) database of 15 terabytes of information. In Ken Jennings’s words:

It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels “sure” enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.3

Applying Watson to fields like medicine has been a bit rougher. IBM signed a high-profile partnership with the University of Texas MD Anderson Cancer Center in Houston in 2012, declaring in a press release that the project was a “moon shot” to cure cancer.4 But five years later, with progress significantly slower than initially anticipated, MD Anderson and IBM have parted ways. The university, which paid IBM a total of $39 million on a contract originally negotiated for less than 10 percent of that amount, had nothing to show for its money except a cancer-screening tool still in the “pilot” stage.5

The problem for Watson and medicine may be related to its success in Jeopardy! In the game show, the correct answers are “known,” so Watson sifts through data and tries to find the right one. And if the machine does not pick a winner, it can adjust its algorithm (so-called machine learning). But in medicine, it is perhaps harder to find the single correct answer. Computers excel at working with “structured data,” such as billing codes or lab test results; but sometimes human medical judgment and doctors’ notes are just as important in making a diagnosis, and those are much harder for a computer to analyze.6

But while IBM’s Watson health care efforts appear (for the moment) to be retrenching,7 other players are aggressively entering the medical AI market. In 2014, Google acquired London-based DeepMind for $400 million.8 Among other research projects, DeepMind has developed a program that plays Go (the Asian board game that is more complicated than chess) and has begun to regularly beat the best players in the world, even when five Go champions combined their efforts to try to defeat the program!

Like Watson, Google’s DeepMind is attempting to apply its technology to health care: last November, it announced a partnership with a London hospital system.9 But DeepMind’s Streams app appears to be built around much more rudimentary AI, and its primary benefit at this point appears to be streamlining notification when blood tests indicate acute kidney injury (AKI).10 While AKI is one of the leading causes of death among National Health Service (NHS) patients, the “special sauce” at this point appears to be simply routing abnormal blood test results to the appropriate doctor’s mobile device.

The lesson learned may be that applying AI to real-world problems requires small steps that can supplement and enhance—rather than replace—human decision making. Streams is not attempting to replace doctors and specialists at this point; it merely gets them key information faster. Another factor is that the move to full electronic health records began only about 10 years ago and is still in process; AI will get better as it has more data to evaluate. One company that is currently analyzing health care data is Modernizing Medicine, which uses a tablet app and data provided by 3,700 doctors on over 14 million patient visits to recommend treatments or drugs based on symptoms, much like Netflix suggesting a new movie.11

We also may have to revise our view of what AI will do: the apparent early promise of Watson was in finding a single “cure for cancer.” But a more promising side of AI may be in simply helping patients manage their own conditions. For instance, type 2 diabetes can often be managed, and in some cases reversed, by controlling the patient’s diet and lifestyle. The problem is that such control requires extensive oversight, in effect a full-time doctor in the home. But with smartphones and home monitoring devices like Fitbit, the patient can provide such information in real time, to be integrated into a larger database. The doctor can then quickly assess the changing condition of the patient. Preliminary testing of an app-based system by one company (Virta Health) has shown that 87 percent of the type 2 diabetic patients in the study reduced their insulin dose or eliminated it outright.12

Medical care is not the only arena AI developers hope to move into: games like chess and Go are supposed to be a “test bed” (to use the industry term) for legal work, crime prevention, and business negotiations, among other fields. But as one IBM AI researcher said, “There are precious few zero-sum, perfect-information, two-player games that we compete in in the real world.”13

The pace of adaptation in medicine relates to the nature of AI’s test bed itself: the gaming world. There is little real-world consequence (other than to Garry Kasparov himself) in losing a chess game. No lives are at risk, and a mistake might cost nothing more than the early loss of a rook. But in medicine and other real-world settings, mistakes have consequences. An AKI marker missed by DeepMind means the life of a real patient is potentially at risk. Thus, there is a natural tendency to be conservative with AI algorithms: the cost of a false positive (the equivalent of a false alarm) is low; the cost of a false negative can be catastrophic.

AI will continue to progress with each advance in semiconductors; note that the computing power in your smartphone exceeds that of Deep Blue 20 years ago. But getting to the next stage, where we rely on AI to make judgments on life-and-death decisions, may take longer than we currently anticipate. The incremental steps shown by Streams, Virta Health, Modernizing Medicine, and others may be more promising, and more successful in the short to medium term, than a “moon shot.”

Endnotes

1. Ken Jennings, My Puny Human Brain, Slate (Feb. 16, 2011), http://www.slate.com/articles/arts/culturebox/2011/02/my_puny_human_brain.html.

2. Kasparov vs. Deep Blue, NPR (Aug. 8, 2014), http://www.npr.org/2014/08/08/338850323/kasparov-vs-deep-blue.

3. Jennings, supra note 1.

4. Press Release, IBM, MD Anderson Taps IBM Watson to Power “Moon Shots” Mission Aimed at Ending Cancer, Starting with Leukemia (Oct. 18, 2013), https://www-03.ibm.com/press/us/en/pressrelease/42214.wss.

5. David H. Freedman, A Reality Check for IBM’s AI Ambitions, MIT Tech. Rev. (June 27, 2017), https://www.technologyreview.com/s/607965/a-reality-check-for-ibms-ai-ambitions/.

6. Daniela Hernandez, Artificial Intelligence Is Now Telling Doctors How to Treat You, Wired (June 2, 2014), https://www.wired.com/2014/06/ai-healthcare/.

7. Indeed, even IBM’s more recent press releases seem more modest: “[IBM’s Watson will be] collaborating with more than a dozen leading cancer institutes to accelerate the ability of clinicians to identify and personalize treatment options for their patients.” Press Release, IBM, Clinicians Tap Watson to Accelerate DNA Analysis and Inform Personalized Treatment Options for Patients (May 5, 2015), https://www-03.ibm.com/press/us/en/pressrelease/46748.wss. The institutes include Ann & Robert H. Lurie Children’s Hospital of Chicago; BC Cancer Agency; City of Hope; Cleveland Clinic; Duke Cancer Institute; Fred & Pamela Buffett Cancer Center in Omaha, Nebraska; McDonnell Genome Institute at Washington University in St. Louis; New York Genome Center; Sanford Health; University of Kansas Cancer Center; University of North Carolina Lineberger Comprehensive Cancer Center; University of Southern California Center for Applied Molecular Medicine; University of Washington Medical Center; and Yale Cancer Center.

8. Oliver Roeder, The Bots Beat Us. Now What?, FiveThirtyEight (July 10, 2017), https://fivethirtyeight.com/features/the-bots-beat-us-now-what/.

9. Mustafa Suleyman, A Milestone for DeepMind Health and Streams, DeepMind (Feb. 27, 2017), https://deepmind.com/blog/milestone-deepmind-health-and-streams/.

10. Streams in NHS Hospitals, DeepMind, https://deepmind.com/applied/deepmind-health/working-nhs/how-were-helping-today/ (last visited Oct. 17, 2017).

11. Hernandez, supra note 6.

12. Kevin Maney, How Artificial Intelligence Will Cure America’s Sick Health Care System, Newsweek (May 24, 2017), http://www.newsweek.com/2017/06/02/ai-cure-america-sick-health-care-system-614583.html.

13. Roeder, supra note 8.

Matthew Henshon (mhenshon@henshon.com) is a partner at the Boston boutique law firm of Henshon Klein LLP. He is chair of the Artificial Intelligence and Robotics Committee. Follow him on Twitter at @mhenshon.