Given the amount of time physicians spend entering data, clicking through screens, navigating pages, and logging in to computers, one would hope that a substantial near-term payback for these efforts would have materialized by now.
Many of us believed this would take the form of health information exchange – the ability to easily access clinical information from hospitals or clinics other than our own, creating a more complete picture of the patient before us. To our disappointment, true information exchange has yet to materialize. (We won’t debate here whether politics or technology is culpable.) We are left to look elsewhere for the benefits of the digitization of medical records and other sources of health care knowledge.
Lately, there has been a lot of talk about the promise of machine learning and artificial intelligence (AI) in health care. Much of the resurgence of interest in AI can be traced to IBM Watson’s appearance as a contestant on Jeopardy in 2011. Watson, a natural language supercomputer with enough power to process the equivalent of a million books per second, had access to 200 million pages of content, including the full text of Wikipedia, for Jeopardy.1 Watson handily outperformed its human opponents – two Jeopardy savants who were also the most successful contestants in game show history – taking the $1 million first prize but struggling in categories with clues containing only a few words.
MD Anderson and Watson: Dashed hopes follow initial promise
As a result of growing recognition of AI’s potential in health care, IBM began collaborations with a number of health care organizations to deploy Watson.
In 2013, MD Anderson Cancer Center and IBM began a pilot to develop an oncology clinical decision support technology tool powered by Watson to aid MD Anderson “in its mission to eradicate cancer.” Recently, it was announced that the project – which cost the cancer center $62 million – has been put on hold, and MD Anderson is looking for other contractors to replace IBM.
While administrative problems are at least partly responsible for the project’s challenges, the undertaking has raised issues with the quality and quantity of data in health care that call into question the ability of AI to work as well in health care as it did on Jeopardy, at least in the short term.
Health care: Not as data rich as you might think
“We are not ‘Big Data’ in health care, yet.” – Dale Sanders, Health Catalyst.2
In its quest for Jeopardy victory, Watson accessed a massive data storehouse subsuming a vast array of knowledge assembled over the course of human history. In health care, by contrast, Watson is limited to a few decades of scientific journals (which may not contribute to diagnosis and treatment as much as one might think); claims data geared to billing, with little clinical information such as outcomes; clinical data from progress notes (plagued by inaccuracies, serial “copy and paste,” and nonstandardized language and numeric representations); and variable-format reports from lab, radiology, pathology, and other disciplines.
To illustrate how data-poor health care is, Dale Sanders, executive vice president for software at Health Catalyst, notes that a Boeing 787 generates 500GB of data in a six-hour flight, while one patient may generate just 100MB of data in an entire year.2 He points out that, in the near term, AI platforms like Watson simply do not have enough data substrate to impact health care as many hoped they would. Over the longer term, he says, if health care can develop a coherent, standard approach to data content, AI may fulfill its promise.
What can AI and related technologies achieve in the near-term?
“AI seems to have replaced Uber as the most overused word or phrase in digital health.” – Reporter Stephanie Baum, paraphrasing from an interview with Bob Kocher, Venrock Partners.3
My observations tell me that we have already made some progress and are likely to make more strides in the coming years, thanks to AI, machine learning, and natural language processing. A few areas of potential gain are:
Clinical documentation
Technology that can derive meaning from words or groups of words can help with more accurate clinical documentation. For example, if a patient has a documented UTI but also has in the record an 11 on the Glasgow Coma Scale, a systolic BP of 90, and a respiratory rate of 24, technology can alert the physician to document sepsis.
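The example above resembles a simple rule-based screen layered on top of extracted vitals. As a minimal sketch, assuming qSOFA-style thresholds (GCS below 15, systolic BP of 100 mm Hg or less, respiratory rate of 22/min or more) and an illustrative function interface not drawn from any specific product:

```python
# Minimal sketch of a rule-based documentation alert for possible sepsis.
# Thresholds follow the qSOFA screening criteria; the function names and
# inputs are illustrative assumptions, not any vendor's actual API.

def qsofa_flags(gcs: int, systolic_bp: int, resp_rate: int) -> list:
    """Return the qSOFA criteria met by a set of vitals."""
    flags = []
    if gcs < 15:
        flags.append("altered mentation (GCS < 15)")
    if systolic_bp <= 100:
        flags.append("systolic BP <= 100 mm Hg")
    if resp_rate >= 22:
        flags.append("respiratory rate >= 22/min")
    return flags


def suggest_sepsis_review(documented_infection: bool,
                          gcs: int, sbp: int, rr: int) -> bool:
    """Prompt the physician to consider documenting sepsis when an
    infection is documented and two or more qSOFA criteria are met."""
    return documented_infection and len(qsofa_flags(gcs, sbp, rr)) >= 2


# The UTI patient from the text: GCS 11, systolic BP 90, respiratory rate 24.
print(suggest_sepsis_review(True, gcs=11, sbp=90, rr=24))  # True - alert fires
```

In practice the hard part is the natural language processing that recognizes "UTI" as a documented infection and pulls the vitals from free text; the rule itself is the easy layer on top.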
Quality measurement and reporting
Similarly, if technology can recognize words and numbers, it may be able to extract and report quality measures (for example, an ejection fraction of 35% in a heart failure patient) from progress notes without having a nurse-abstractor manually enter such data into structured fields for reporting, as is currently the case.
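A toy sketch of that kind of extraction, using a regular expression to find an ejection fraction in note text (real NLP pipelines handle far more phrasing variation; the pattern and note text here are assumptions for demonstration):

```python
import re

# Illustrative pattern: matches phrasings like "ejection fraction of 35%",
# "EF 35%", or "EF: 35 %". A production system would need far broader coverage.
EF_PATTERN = re.compile(
    r"(?:ejection fraction|EF)\s*(?:of|is|:|=)?\s*(\d{1,2})\s*%",
    re.IGNORECASE,
)


def extract_ejection_fraction(note_text: str):
    """Return the first ejection fraction found (as an int percent), else None."""
    match = EF_PATTERN.search(note_text)
    return int(match.group(1)) if match else None


note = "Echo today shows an ejection fraction of 35% with global hypokinesis."
print(extract_ejection_fraction(note))  # 35
```

The appeal for quality reporting is that a value like this could flow into a structured measure automatically, rather than being re-keyed by a nurse-abstractor.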