Hospital medicine pioneer Robert Wachter, MD, MHM, chair of the department of medicine at the University of California San Francisco (UCSF), has tackled some of the big, transformative topics in healthcare in his published books.
These include medical errors and the search for solutions (“Internal Bleeding: The Truth Behind America’s Terrifying Epidemic of Medical Mistakes,” in 2005); the rollout of the electronic health record (“The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age,” in 2015); and, coming up in about 18 months, the place of artificial intelligence (AI) in medicine, in a new work tentatively titled “A Giant Leap: How AI Is Transforming Healthcare—and What That Means for Our Future.”
At the 28th annual UCSF Management of the Hospitalized Patient conference, founded by Dr. Wachter and held in San Francisco in October, he devoted a keynote address to describing AI’s looming “Hemingway moment” for hospitalists. In Ernest Hemingway’s 1926 novel, “The Sun Also Rises,” a character is asked how he went bankrupt and replies, “Two ways. Gradually and then suddenly.”
Healthcare, particularly in the hospital, has seen gradual advances in applications of AI, although more slowly than in many other industries. But it may soon see changes coming at great speed, Dr. Wachter said.
He is largely upbeat about what AI can do, although aware of darker possibilities, including, eventually, for physicians’ jobs. “In a few years, I’d say, healthcare will be transformed [by AI], mostly for the better… in ways that will be exciting, a little scary, a little disorienting, but much more quickly than we’re used to.”
Computer tools can now do a variety of tasks they couldn’t do before, such as making predictions and diagnoses, summarizing huge amounts of data and text, and producing videos and pictures. “AI can connect something that it reads in the chart with an image of an echocardiogram, do multi-modal work, and then communicate that in ways that feel like natural language, like it’s talking to you or the patient. That’s all new.”
For now, the tools that are ready for medical prime time mostly work in a single mode, such as using language or using discrete data, he said. But over time, the computer will pick up and learn from all the senses that humans use. The electronic health record (EHR) software giant Epic and its competitors are now partnering with AI companies to integrate such tools into their systems.
What is AI?
Artificial intelligence refers to computer systems that can perform tasks that previously required human intelligence, such as learning from experience; the umbrella term also covers approaches such as machine learning and natural language processing. Dr. Wachter contrasted AI before and after November 30, 2022, the date when OpenAI rolled out its landmark AI tool ChatGPT, “where you could interact with it, and it would get back to you in an almost conversational tone.”
Generative AI, which can create new content, is rapidly being improved upon. He suggested that hospitalists should try to get comfortable practicing with ChatGPT and finding out what it’s good at—and not.
Dr. Wachter himself uses ChatGPT (running the GPT-4o model) and similar tools several times a day to search for information and to answer questions that would otherwise take much longer to investigate. He offered examples, such as asking for a summary of the published writings of an expert he planned to interview for his book and a summary of the literature on healthcare bias associated with AI.
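For hospitalists who want to experiment beyond the chat window, the same kind of summarization request can also be sent programmatically. The sketch below is purely illustrative: it assumes the OpenAI Python SDK, an API key stored in the OPENAI_API_KEY environment variable, and an example model name, and it is not a description of Dr. Wachter’s own workflow.

```python
# Illustrative sketch: asking a large language model for a literature summary.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name and prompt are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Summarize the recent peer-reviewed literature on bias in healthcare AI "
    "in five bullet points, and note the major unresolved questions."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model; substitute whatever your account offers
    messages=[
        {"role": "system",
         "content": "You are a concise research assistant for a physician."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

As with any generative output, such a summary still needs to be checked against the primary sources, a point underscored by the hallucination concerns discussed later.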
In his talk, Dr. Wachter revisited one of medicine’s previous big transformations, from paper medical charts to the digital world of the EHR, which offers a lot of lessons about digital transformation in healthcare and its unanticipated consequences. The EHR transformation happened fairly quickly, but only after the federal government opted to invest $30 billion to get hospitals and physicians to computerize.
“And obviously it worked,” Dr. Wachter said. In 2008 fewer than one in 10 hospitals had an EHR; by 2015 fewer than one in 10 did not. But this transformation often didn’t go smoothly, with doctors feeling demoralized as they became expensive data entry clerks.
“EHR enabled a lot of outside entities to make us do stuff that’s incredibly time-consuming.” To make all that data entry worthwhile, he said, “Maybe I should get useful decision support or guidance to make me a better clinician.” But in many cases, there was remarkably little useful return from the EHR.
Dr. Wachter cited Erik Brynjolfsson, a professor at Stanford University in California who directs its Digital Economy Lab and who identified the productivity paradox of information technology.1 The paradox, Brynjolfsson observed in 1993, is that it often takes several years for industries to start realizing the promised productivity gains of a new technology. The key to unlocking the paradox, Dr. Wachter said, is not just implementing new and improved technology, which itself takes time to mature.
Organizations also need to remodel themselves—a process that Brynjolfsson calls complementary innovation—to allow them to take full advantage of new digital tools. “We just took the EHR and put it into our workflow,” Dr. Wachter said. As a result, the doctor’s note in the EHR looks like a piece of paper filed under a tab, just as it did in the paper chart. In this case, the system failed to take advantage of the potential for new models and paradigms. Will generative AI avoid those pitfalls?
AI in practice
Some of AI’s documented achievements to date include the ability to pass medical board exams, convey perceived empathy to patients and families, briefly summarize important information from extensive hospital charts, write prior authorization requests, and enhance billing processes.
“I think digital scribes are coming soon to hospitals. People are already using them,” Dr. Wachter said. AI predictive tools, such as for sepsis or cardiac arrest or hospital bed availability, haven’t been that great—yet. “We’re beginning to see precision medicine, which has been around the corner for 30 years. When you order anti-platelet agents at UCSF, the EHR will look at the patient’s genetic predictors for which one they’re likely to respond to and give you some guidance.”
AI also raises a host of ethical issues that are far from resolved, including disparities and biases that get imported into AI products, along with privacy, security, and legal concerns. There has also been a propensity for so-called “AI hallucinations,” in which the system produces something that sounds reasonable but is completely fabricated, although these are becoming less frequent with newer systems. “The AI of today is the worst it’s ever going to be,” he said.
Human vigilance will continue to be needed over important medical decisions guided by AI, but humans can grow complacent when they rely on it. “Humans will be tasked with being safety bulwarks—which is inherently unsafe,” Dr. Wachter said.2
“I think in areas that are high stakes, like diagnoses or appropriate treatments, the human will be the final arbiter for a considerable period of time. Partly because of medical/legal risk, but partly because we don’t know how to send a bill to anyone unless there’s a human attached to it. But I think you’re going to start seeing more real-time decision support that will be highly relevant to hospitalists rolling out in the next couple of years.”
The future of AI
During his keynote presentation, and then in a subsequent Zoom interview with The Hospitalist, Dr. Wachter demonstrated how he uses AI by speaking a problem into GPT software on his smartphone and asking for its help. “I’m a doctor, and I’m about to go in and talk to a patient,” he posed. The hypothetical patient was a 37-year-old woman with a new diagnosis of breast cancer and two positive lymph nodes. She has two small children. “I want to tell her exactly what the diagnosis was and give her some sense of the prognosis, but also leave her with hope,” Dr. Wachter explained. “Can you help coach me on this conversation?”
The disembodied voice responded in seconds with a five-step process for approaching a hospitalized patient newly diagnosed with breast cancer, balancing compassion with precision. “It’s so important to approach this with empathy and clarity,” it said. “First, create a private and comfortable setting.”
Then start by gently explaining the diagnosis, using clear and compassionate language, GPT advised. Provide information at a measured pace, in small, digestible amounts. Be honest about the seriousness of the diagnosis, acknowledge its emotional impact, and express the doctor’s commitment to supporting her. Offer emotional support, resources, and next steps.
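As a rough illustration of what such a demonstration looks like under the hood, the same coaching prompt could be sent as text and the reply streamed back word by word, approximating the near-instant spoken response. The snippet below rests on the same assumptions as the earlier sketch (OpenAI Python SDK, example model name) and is not a reconstruction of the app Dr. Wachter actually used.

```python
# Illustrative sketch: streaming a coaching-style reply so it arrives
# incrementally, much as the spoken response did in the demonstration.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is an example, not the tool Dr. Wachter used.
from openai import OpenAI

client = OpenAI()

coaching_prompt = (
    "I'm a doctor about to talk to a 37-year-old woman with a new diagnosis "
    "of breast cancer and two positive lymph nodes. She has two small "
    "children. I want to explain the diagnosis and prognosis honestly but "
    "leave her with hope. Can you coach me on this conversation?"
)

stream = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": coaching_prompt}],
    stream=True,  # tokens arrive as they are generated
)

# Print the advice as it streams in, roughly mimicking the real-time reply.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Streaming here is only a design choice meant to echo the conversational feel of the demo; a single blocking call returns the same advice.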
In his talk, Dr. Wachter acknowledged that much of the potential for AI can be unknown and scary, adding that it’s hard to predict the future of medicine in a world dominated by AI. “I guess my bottom line is that we’re all trying to navigate somewhere between ecstatically excited and terrified.”
He leans toward the excited part. For the next 10 or 15 years, “I think we’re actually going to have something of a golden era in medicine, where a lot of the current tasks that don’t involve practicing at the top of our license, that don’t really take the training and the intellectual firepower of physicians, particularly hospitalists, will be taken off our plates. Or at least we’ll get help with them, and it will allow us to do the things that we are uniquely situated to do better and safer and more efficiently,” he said.
How long that golden era will last, Dr. Wachter doesn’t know. “I’m worried about what medicine is going to look like 30 years from now, and who’s going to have a job.” He has a daughter and a son-in-law now in medical residency training. What will be their future? But in some ways, he said, physicians are better protected than most other professions. “If they don’t have jobs, who else will?”
Larry Beresford is an Oakland, Calif.-based freelance medical journalist.
Disclosure: Dr. Wachter is on the board of directors for The Doctors Company; Second Wave Delivery Solutions; Third Wave Rx; The Josiah Macy Foundation; and Lucian Leape Institute of the Institute for Healthcare Improvement; and on scientific advisory boards for Commure; Cural Health; Forward Health; Notable; and Roon.
References
- Brynjolfsson E. The productivity paradox of information technology. Commun ACM. 1993;36(12):66-77.
- Adler-Milstein J, et al. The limits of clinician vigilance as an AI safety bulwark. JAMA. 2024;331(14):1173-1174.