How to Interpret the Data
Obtaining the data is only half the battle. Another core tool in the white paper is the template section “Unique Measurement and Analysis Considerations,” which guides hospitalists as they verify their data and ensure that comparisons are valid.
Dr. Westle’s group has studiously tracked its performance metrics for years; other groups may have little experience in this domain. Another critical step in creating dashboard reports, he states, is understanding how the data are collected and ensuring the data are accurate and attributed appropriately.
“The way clinical cases are coded ought to be the subject of some concern and scrutiny,” says John Novotny, MD, director of the Section of Hospital Medicine of the Allen Division at Columbia University Medical Center in New York City and another Benchmarks Committee member. “There may be a natural inclination to accept the performance information provided to us by the hospital, but the processes that generated these data need to be well understood to gauge the accuracy and acceptability of any conclusions drawn.”
With a background in statistics and information technology, Dr. Novotny cautions that “some assessment of the validity of comparisons within or between groups or to benchmark figures should be included in every analysis or report—to justify any conclusions drawn and to avoid the statistical pitfalls common to these data.”
He advises HMGs to run the numbers by someone with expertise in data interpretation, especially before reports are published or submitted for public review. Such issues arise often in the analysis of frequency data, such as the number of deaths a group records for a particular diagnosis over a period of time, where the counts may be relatively small.
For example, if five deaths are observed among 20 patients, the resulting 25% death rate is so imprecise that the true underlying rate could fall anywhere between roughly 8% and 50%.
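To see where a range like that comes from, the uncertainty around a proportion estimated from a small sample can be quantified with an exact (Clopper-Pearson) binomial confidence interval. The short Python sketch below is illustrative only and not part of the white paper; it reproduces the roughly 8%-to-50% range for five deaths among 20 patients.

```python
from scipy.stats import beta

def clopper_pearson(events, n, confidence=0.95):
    """Exact binomial (Clopper-Pearson) confidence interval for a proportion."""
    alpha = 1 - confidence
    lower = beta.ppf(alpha / 2, events, n - events + 1) if events > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, events + 1, n - events) if events < n else 1.0
    return lower, upper

# Five deaths observed among 20 patients: a 25% observed rate
low, high = clopper_pearson(5, 20)
print(f"Observed rate: {5 / 20:.0%}, 95% CI: {low:.0%} to {high:.0%}")
# Prints roughly: Observed rate: 25%, 95% CI: 9% to 49%
```

For comparison, the same 25% rate observed in 500 patients would carry an interval of roughly 21% to 29%, which is why stratifying into small subsets erodes the reliability of the numbers.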
“This is a limitation inherent in drawing conclusions from relatively small data sets, akin to driving down a narrow highway with a very loose steering wheel—avoiding the ravines is a challenge,” he says.
Dr. Novotny contributed the section on mortality metrics for the white paper. Although a group’s raw mortality data may be easily obtained, “HMGs should be wary of the smaller numbers resulting from stratifying the data by service, DRG [diagnosis-related group], or time periods,” he explains.
Instead, as suggested in the “Interventions” section, the HMG can complement the raw numbers by documenting its use of processes thought to reduce the risk of mortality in hospitalized patients. Potentially useful processes under development and discussion in the literature include interdisciplinary rounds, effective inter-provider communication, and ventilator care protocols, among others.
“We need to show that not only do we track our mortality figures, we analyze and respond to them by improving our patient care,” Dr. Novotny says. “We need to show that we’re making patient care safer.”
At the Ochsner Health Center in New Orleans, the HMG decided to track readmission rates for congestive heart failure, the primary DRG for inpatient care, and to compare its rates with those of other services. Because heart failure is traditionally the bailiwick of cardiology, “you might think that the cardiology service would have the best outcomes,” says Steven Deitelzweig, MD, vice president of medical affairs and system chairman.
But, using order sets aligned with JCAHO standards and with evidence-based best practices in cardiology, Dr. Deitelzweig’s hospitalist group “was able to demonstrate statistically and objectively that our outcomes were better, adjusting for case mix.”
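As a rough sketch of the kind of comparison Dr. Deitelzweig describes, the Python example below contrasts readmission rates on two services with a chi-square test. The counts are hypothetical, invented purely for illustration, and a real analysis of this sort would also adjust for case mix, for example with a logistic regression that includes severity and comorbidity variables.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts, for illustration only (not actual Ochsner data):
# rows = service, columns = (readmitted, not readmitted)
table = np.array([
    [15, 185],  # hospitalist service: 15 readmissions among 200 heart-failure discharges
    [35, 165],  # comparison service: 35 readmissions among 200 discharges
])

chi2, p_value, dof, expected = chi2_contingency(table)
rates = table[:, 0] / table.sum(axis=1)
print(f"Hospitalist readmission rate: {rates[0]:.1%}")
print(f"Comparison readmission rate:  {rates[1]:.1%}")
print(f"Chi-square p-value: {p_value:.3f}")
# With these made-up counts the p-value falls well below 0.05,
# but a credible comparison would still require case-mix adjustment.
```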