Continuing controversy
The American Hospital Association – which has had strong concerns about the methodology and usefulness of hospital star ratings – is pushing back on some of the changes to the system being considered by CMS. In its submitted comments, the AHA supported only three of the 14 potential star ratings methodology changes under consideration. The AHA and the Association of American Medical Colleges, among others, have urged CMS to take down the star ratings until major changes can be made.
“When the star ratings were first implemented, a lot of challenges became apparent right away,” said Akin Demehin, MPH, AHA’s director of quality policy. “We began to see that those hospitals that treat more complicated patients and poorer patients tended to perform more poorly on the ratings. So there was something wrong with the methodology. Then, starting in 2018, hospitals began seeing real shifts in their performance ratings when the underlying data hadn’t really changed.”
CMS uses a statistical approach called latent variable modeling. Its underlying assumption is that a hospital’s unobserved quality can be inferred from the data already being collected, Mr. Demehin said, though he noted that “that can be a questionable assumption.” He also emphasized the need for ratings that compare hospitals of similar size and model to one another.
Suparna Dutta, MD, division chief of hospital medicine at Rush University, Chicago, said analyses done at Rush showed that the statistical model CMS used to calculate the star ratings dynamically changed the weighting of certain measures with every release. “That meant one specific performance measure could play an outsized role in determining a final rating,” she said. In particular, she noted, the methodology inadvertently penalized large hospitals, academic medical centers, and institutions that provide heroic care.
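In broad strokes, a latent variable model of this kind treats each hospital’s observed measure scores as noisy reflections of a single unobserved quality factor. The sketch below is illustrative notation only, not CMS’s exact specification:

y_{ij} = \mu_j + \lambda_j \theta_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma_j^2)

Here y_{ij} is hospital i’s standardized score on measure j within a measure group, \theta_i is the hospital’s latent quality for that group, and the loading \lambda_j determines how heavily measure j counts toward it. Because the loadings are re-estimated from the data each time the ratings are recalculated, a measure’s effective weight can shift between releases even when hospitals’ underlying performance has not changed – the behavior the Rush analyses flagged.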
“We fundamentally believe that consumers should have meaningful information about hospital quality,” said Nancy Foster, vice president for quality and patient safety policy at the AHA. “We understand the complexities of Hospital Compare and the challenges of getting simple information for consumers. To its credit, CMS is thinking about how to do that, and we support them in that effort.”
Getting a handle on quality
Hospitalists are responsible for ensuring that their hospitals excel in the care of patients, said Julius Yang, MD, hospitalist and director of quality at Beth Israel Deaconess Medical Center in Boston. That also means keeping up with the primary public ways these issues are addressed – through reporting of quality data and through reimbursement policy. “That should be part of our core competencies as hospitalists.”
Some of the measures on Hospital Compare don’t overlap much with the work of hospitalists, he noted. But for others – such as those for pneumonia, COPD, and care of patients with stroke, or for mortality and 30-day readmission rates – “we are involved, even if not directly, and certainly responsible for contributing to the outcomes and the opportunity to add value,” he said.
“When it comes to 30-day readmission rates, do we really understand the risk factors for readmissions and the barriers to patients remaining in the community after their hospital stay? Are our patients stable enough to be discharged, and have we worked with the care coordination team to make sure they have the resources they need? And have we communicated adequately with the outpatient doctor? All of these things are within the wheelhouse of the hospitalist,” Dr. Yang said. “Let’s accept that the readmissions rate, for example, is not a perfect measure of quality. But as an imperfect measure, it can point us in the right direction.”
Jose Figueroa, MD, MPH, hospitalist and assistant professor at Harvard Medical School, has been studying, on behalf of his health system, the impact of hospital penalty programs such as the Hospital Readmissions Reduction Program on health equity. In general, he said, hospitalists play an important role in shaping processes of care and serving on quality-oriented committees across multiple realms of the hospital.
“What’s hard from the hospitalist’s perspective is that there don’t seem to be simple solutions to move the dial on many of these measures,” Dr. Figueroa said. “If the hospital is at three stars, can we say, okay, if we do X, Y, and Z, then our hospital will move from three to five stars? Some of these measures are so broad and not in our purview. Which ones apply to me as a hospitalist and my care processes?”
Dr. Dutta sits on the SHM Policy Committee, which has been working to bring these issues to the attention of frontline hospitalists. “Hospitalists are always going to be aligned with their hospital’s priorities. We’re in it to provide high-quality care, but there’s no magic way to do that,” she said.
Hospital Compare measures sometimes end up in hospitalist incentive plans – readmission rates tied to penalty programs, for example – even though such measures are fairly arbitrary and hard to pin on one doctor, Dr. Dutta explained. “If you look at the evidence regarding these metrics, there are not a lot of data to show that the metrics lead to what we really want, which is better care for patients.”
A recent study in the British Medical Journal, for example, examined the association between penalties under the Hospital-Acquired Condition Reduction Program and clinical outcomes.1 The researchers concluded that the penalties were not associated with significant change in outcomes and did not appear to drive meaningful clinical improvement.