The most common metrics used in quality-incentive programs, based on 45 responses to SHM’s survey:
- 73% of programs use JCAHO heart failure measures;
- 73% use “good citizenship” measures;
- 73% use patient satisfaction measures;
- 67% use JCAHO pneumonia measures;
- 51% use transitions-of-care measures;
- 44% use JCAHO myocardial infarction (MI) measures;
- 31% use throughput measures;
- 27% use avoidance of unapproved abbreviations;
- 24% use a measure based on medication reconciliation;
- 11% use 100,000 Lives Campaign measures;
- 9% use readmission rate measures;
- 7% use mortality rate measures; and
- 2% use end-of-life measures.
Recommendations
Hospitalist quality-based compensation plans continue to grow rapidly in prevalence, but the details of how each plan is structured will govern whether it benefits our patients, improves the overall value of the care we provide, and serves as a meaningful component of our compensation. I suggest each practice consider implementing plans with the following attributes:
A total dollar amount available for performance that is large enough to influence hospitalist behavior. I think quality incentives should make up as much as 15% to 20% of a hospitalist’s annual income. Plans that tie quality performance to 7% or less of annual compensation (the case for 40% of groups in the above survey) are rarely effective.
Money vs. metrics. It is usually better to establish a plan based on a sliding scale of improved performance rather than a single all-or-nothing threshold. For example, if all of the bonus money is available only upon reaching a 10% improvement in performance, consider instead providing 10% of the total available money for each 1% improvement in performance.
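As a rough illustration of the sliding-scale arithmetic described above, here is a minimal sketch; the dollar pool, baseline, and target figures are hypothetical and not taken from the survey.

```python
def sliding_scale_payout(pool_dollars, baseline_pct, achieved_pct, target_gain_pct=10.0):
    """Pay out a proportional share of the bonus pool for each point of improvement,
    capped at the full pool once the target gain is reached."""
    gain = max(0.0, achieved_pct - baseline_pct)              # improvement actually achieved
    earned_fraction = min(gain, target_gain_pct) / target_gain_pct
    return pool_dollars * earned_fraction

# Example: a group that improves from 80% to 85% against a 10-point target
# earns half of a hypothetical $10,000 pool; under an all-or-nothing
# threshold, the same group would receive nothing despite a real gain.
print(sliding_scale_payout(10_000, 80.0, 85.0))  # -> 5000.0
```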
Degree of difficulty. Performance thresholds should be set so that hospitalists need to change their practices to achieve them, but not so far out of reach that hospitalists give up on them. This can get tricky: many practices make the mistake of setting thresholds that are very easy to reach (e.g., at or near the current level of performance).
Metrics for which trusted data is readily available. In most cases, this means using data already being collected. Avoid hard-to-track metrics, as they are likely to lead to disagreements about their accuracy.
Group vs. individual measures. Most performance metrics can’t be clearly attributed to one hospitalist rather than another. For example, who gets the credit or blame when Ms. Smith does or does not get a pneumovax? The majority of performance metrics are best measured and paid on a group basis. Some metrics, such as documenting medication reconciliation on admission and discharge, can be effectively attributed to a single hospitalist and could be paid on an individual basis.
Small number of metrics. A meaningfully large amount of money should be connected to each one. Don’t make the mistake of dividing a $10,000-per-doctor annual quality bonus pool among 20 metrics; each metric would then pay a maximum of only $500 per year.
Rotating metrics. Consider an annual meeting with members of your hospital’s administration to jointly establish the metrics used in the hospitalist quality incentive for that year. It is reasonable to change the metrics periodically.
It seems to me that pay-for-performance (P4P) programs are in their infancy and will continue to evolve rapidly. Plans that fail to improve outcomes enough to justify the complexity of implementing, tracking, and paying for them will slowly disappear. (I wonder whether payment for pneumovax administration during the hospital stay will fall into this category.) New, more effective, and more valuable programs will be developed.