Comparing Hospital Performance: Developing a More Appropriate Method Using Peer Groups
Monday, November 5, 2007: 3:00 PM
Samuel Ogunbo, PhD, Research & Development, Quality Indicator Project / Maryland Hospital Association, Elkridge, MD
Nikolas Matthes, MD, PhD, MPH, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD
Jacob Jen-Hao Cheng, PhD, MS, Research & Development, Quality Indicator Project / Maryland Hospital Association, Elkridge, MD
Carlos Alzola, MS, Data Insights, Vienna, VA
As public reporting and pay-for-performance programs for hospitals based on the National Hospital Quality (NHQ) Measures gain momentum, questions arise about appropriate comparisons of hospitals. Research shows that hospital bedsize, region, and teaching status can predict differences in performance, yet current methodologies continue to compare hospitals to national and state aggregates. Given the known variation in health care practices, current approaches may not be appropriate. The Quality Indicator Project® conducted a retrospective analysis of Acute Myocardial Infarction (AMI), Heart Failure (HF), Pneumonia (PN), and Surgical Infection Prevention (SIP) NHQ Measure data collected from 442 customer hospitals during 2005 and 2006. Principal Component Analysis and Analysis of Variance (ANOVA) suggested that bedsize, teaching status, type, setting, and region are the hospital characteristics that explain the most variation in performance. We used Factor Analysis to decompose the relevant hospital characteristics and Cluster Analysis to stratify hospitals into four peer groups. Peer Group 1 consisted predominantly of hospitals with more than 200 beds, Peer Group 2 included mostly non-teaching hospitals, all of the hospitals in Peer Group 3 were located in the Midwest, and Peer Group 4 consisted of Critical Access Hospitals. Peer Group 3 had the highest mean performance across all of the clinical conditions (AMI, HF, PN, and SIP), while Peer Group 4 had the lowest scores in all but AMI. In addition, when mean performance on the AMI, HF, PN, and SIP measure sets was compared across the established peer groups, significant differences in performance were apparent. For HF, Peer Groups 1 and 3 had means (0.833 and 0.850) that were significantly different from those of Peer Groups 2 and 4 (0.794 and 0.790, respectively). Mean performance in PN was highest for Peer Group 3 (0.892) and significantly different from that of Peer Groups 1 and 2 (0.855 and 0.865), which in turn had significantly higher means than Peer Group 4 (0.833). There was no significant difference between the means of Peer Groups 1 and 3 for SIP (0.826 vs. 0.802), but Peer Group 3 had a mean that was significantly different from those of Peer Groups 2 and 4 (0.782 and 0.779, respectively). These results have important implications for pay-for-performance and public reporting, since differences in hospital performance and in hospitals' capacity to care for patients are not currently considered sufficiently in the methods and comparison groups used to assess hospitals.
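The abstract describes its statistical pipeline only in prose; the following is a minimal sketch, in Python, of how such a pipeline could look. The input file name, the column names (bedsize, teaching, type, setting, region, and the hf score), the three-factor solution, and the use of k-means as the clustering method are illustrative assumptions, not the authors' actual implementation.

    import pandas as pd
    from scipy import stats
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FactorAnalysis
    from sklearn.cluster import KMeans

    # Hypothetical input: one row per hospital with its characteristics and
    # mean measure-set scores for AMI, HF, PN, and SIP.
    hospitals = pd.read_csv("hospital_characteristics.csv")

    # Encode the categorical characteristics and standardize them.
    features = pd.get_dummies(
        hospitals[["bedsize", "teaching", "type", "setting", "region"]],
        drop_first=True,
    ).astype(float)
    X = StandardScaler().fit_transform(features)

    # Factor analysis: decompose the characteristics into a few latent factors.
    factor_scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

    # Cluster analysis on the factor scores to form four peer groups.
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
    hospitals["peer_group"] = kmeans.fit_predict(factor_scores)

    # Compare mean HF performance across peer groups with a one-way ANOVA.
    hf_by_group = [g["hf"].dropna() for _, g in hospitals.groupby("peer_group")]
    f_stat, p_value = stats.f_oneway(*hf_by_group)
    print(hospitals.groupby("peer_group")["hf"].mean().round(3))
    print(f"One-way ANOVA, HF measure set: F = {f_stat:.2f}, p = {p_value:.4f}")

Reproducing the group-by-group contrasts reported in the abstract would additionally require pairwise comparisons between the peer groups (for example, Tukey's HSD after the ANOVA).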
Learning Objectives: Following the presentation, participants will be able to:
• Understand the power and utility of Principal Component Analysis and Factor Analysis for developing peer groups for hospitals
• Understand the implications of hospital peer grouping for both pay-for-performance and public reporting
• Critically assess the limitations of blanket comparisons of performance between hospitals
Keywords: Quality Improvement, Performance Measurement
Presenting author's disclosure statement: Any relevant financial relationships? No. Any institutionally-contracted trials related to this submission?
I agree to comply with the American Public Health Association Conflict of Interest and Commercial Support Guidelines, and to disclose to the participants any off-label or experimental uses of a commercial product or service discussed in my presentation.