When Rajeev Alexander, MD, lead hospitalist at Oregon Medical Group with PeaceHealth of Eugene, Ore., sought out an ophthalmologist, he didn’t go to provider Web sites or directory pages. He did what most patients do: He asked around.
Some of the nurses at work gave him suggestions. “‘This guy does a lot of LASIK and might push it.’ Or, ‘This guy has good relationships with patients.’ Or, ‘This is the guy I’d send my husband to,’” Dr. Alexander explains. “That helped.”
Were he to recommend a hospital, Dr. Alexander says he would base his selection on one major criterion: the collegiality of the facility’s doctors, pharmacists, and nurses. “If all the specialists in the hospital are talking to each other, and if they feel they can trust each other,” he says, “then I think you’re going to get good care.”
Dr. Alexander never mentions checking the performance of the physician or hospital he may use. It seems he’s not alone. In recent years, some famous cases have brought attention to how infrequently patients actually consult the available quality data when selecting a provider.
It’s unlikely, for example, that Sen. Ted Kennedy (D-Mass.) researched provider collegiality as a quality measure when he chose a neurosurgeon at Duke University Medical Center to remove his malignant glioma. And in 2004, when President Clinton needed a quadruple coronary bypass, he chose an average-rated New York cardiac surgeon. Why? Because, according to Dr. Jha, he didn’t compare quality reports.
Physicians are just as guilty of ignoring the information. Robert M. Wachter, MD, chief of the division of hospital medicine and chief of the medical service at the University of California San Francisco Medical Center, learned from audience feedback at a hospital medicine continuing medical education course that even members of UCSF’s Epidemiology and Biostatistics Department do not consult quality data before making medical decisions for themselves or a loved one. “Patients won’t start using quality data until we do it ourselves,” Dr. Wachter writes in his blog, Wachter’s World (www.wachtersworld.com). “Best guess: three to five years.”
So how much progress have we really made in using publicly reported data to choose individual providers and hospitals? What should be measured in the future, particularly as it affects hospitalist practice? And how can hospitalists influence the types of data collected?
Along the Continuum
The problem isn’t that people don’t know about the data. More than one-quarter (26%) of consumers who participated in a 2002 Harris poll said they were aware of hospital report cards, Dr. Wachter writes in his book Internal Bleeding: The Truth Behind America’s Terrifying Epidemic of Medical Mistakes. Yet only 3% considered changing their care based on those ratings, and only 1% actually made a change.1
Those who do consult the data seem to benefit, at least in the case of New York state’s public reporting system for coronary artery bypass surgery. Dr. Jha and Arnold Epstein, MD, professor and chair of the department of Health Policy and Management at the Harvard School of Public Health, found that users who picked a top-performing hospital or surgeon had roughly half the mortality risk of those who selected from the bottom quartile.2
But it is unusual for patients to choose a hospital based on publicly reported information alone, and Dr. Jha believes it’s largely due to mindset. “People are not used to approaching healthcare the way they would walk into a car dealership, for instance, ready to do battle,” he says.
Even if physicians and patients don’t consistently use the data, publishing it still has value. It helps physicians gauge their professional status, for example. “If someone is not looking good,” Dr. Jha says, “it is a huge impetus to improve, as long as you believe that what you are measuring is really associated with quality.”