201093 Part D Plan Ratings: Mixed Methods to Evaluate Part D Organizations' Performance and Promote Quality Improvement

Tuesday, November 10, 2009

Ying Wang, PhD, IMPAQ International, Columbia, MD
Tom Young, PhD, Booz Allen Hamilton, Rockville, MD
Sungsoo Oh, MS, Centers for Medicare & Medicaid Services, Baltimore, MD
Arthur Kirsch, PhD, IMPAQ International, Silver Spring, MD
Christopher Powers, PharmD, Centers for Medicare & Medicaid Services, Baltimore, MD
Oswaldo Urdapilleta, PhD, IMPAQ International, Columbia, MD
Vikki Oates, Centers for Medicare & Medicaid Services, Baltimore, MD
Research Objective:

Since the Medicare Part D program began in 2006, CMS has undertaken a new effort, alongside its other initiatives monitoring health care providers' quality, to evaluate Part D organizations' performance. This study identifies accurate methods for producing individual and composite star ratings, which are posted on CMS's Medicare Prescription Drug Plan Finder website to help beneficiaries choose Medicare Part D plans.

Study Design:

This study used a mixed-methods framework to evaluate alternative statistical methods. A hybrid approach generated star ratings for each measure by identifying cut-off points in the raw data through adjusted percentiles and two-stage cluster analysis, then applying smoothing and gap analyses to set the final cut-off points. Principal factor analysis and cluster analysis were contrasted, with sensitivity analyses using different factor rotation models and significance levels, to identify the grouping structure of individual measures for constructing composite star ratings in multiple domains. A summary score representing each plan's overall performance was constructed, accounting for both the mean and the relative variance of performance across measures.
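To illustrate the hybrid cut-point idea described above, the following is a minimal sketch, not the authors' actual implementation. The input scores, the choice of five k-means clusters as a stand-in for the two-stage cluster analysis, the plain (unadjusted) percentiles, and the simple averaging used as "smoothing" are all assumptions for demonstration only.

```python
# Minimal sketch of a percentile-plus-cluster hybrid for star-rating cut-points.
# All data and tuning choices below are illustrative, not the CMS method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.normal(loc=75, scale=10, size=200)  # hypothetical contract scores

# Candidate cut-points from percentiles (an adjusted-percentile rule would
# shift these boundaries; plain quintile boundaries are shown here).
pct_cuts = np.percentile(scores, [20, 40, 60, 80])

# Candidate cut-points from clustering: k-means with 5 clusters stands in for
# the two-stage cluster analysis; cut-points fall midway between sorted centers.
centers = np.sort(
    KMeans(n_clusters=5, n_init=10, random_state=0)
    .fit(scores.reshape(-1, 1))
    .cluster_centers_.ravel()
)
cluster_cuts = (centers[:-1] + centers[1:]) / 2

# "Smoothing": blend the two candidate sets; a gap analysis would then inspect
# the spacing between adjacent thresholds before accepting them as final.
final_cuts = (pct_cuts + cluster_cuts) / 2
stars = np.digitize(scores, final_cuts) + 1  # 1-5 star assignment per contract
print(final_cuts, np.bincount(stars)[1:])
```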

Population Studied:

The study population comprises 80 Medicare stand-alone prescription drug plans and 569 Medicare Advantage contracts that provided drug benefits in CY2008. Nineteen performance measures were evaluated. The data come from multiple sources, including CMS administrative data and consumer survey data (e.g., CAHPS).

Principal Findings:

Compared to the basic percentile procedure used in prior years, the hybrid approach produced a more dispersed distribution of individual star ratings across contracts. In developing the grouping structure, not all measures loaded clearly onto either a three- or a four-factor structure. Cronbach's alpha was low (<0.5) for some groups but high (>0.7) for others. Combining these results with a policy decision, the 19 measures were grouped into four domains: 1) customer service, 2) complaints, 3) customer satisfaction, and 4) drug pricing and patient safety. The summary score was based on the mean of the individual measures. This summary measure had a central tendency problem, which was addressed by applying a coefficient representing the variance of performance across measures.
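The abstract does not specify the coefficient used to correct the central tendency problem, so the sketch below only illustrates the general idea of combining a mean summary score with a variability adjustment; the standard-deviation penalty and its 0.5 weight are assumptions.

```python
# Minimal sketch of a mean-plus-variability summary score that rewards
# contracts with stable performance across measures. The 0.5 weight and the
# standard-deviation form of the adjustment are illustrative assumptions.
import numpy as np

def summary_score(measure_stars):
    """Combine a contract's per-measure star ratings into one summary score."""
    stars = np.asarray(measure_stars, dtype=float)
    mean = stars.mean()
    variability = stars.std(ddof=0)  # spread of performance across measures
    return mean - 0.5 * variability  # penalize uneven performance

# Two hypothetical contracts with the same mean but different consistency.
print(summary_score([4, 4, 4, 4, 4]))  # stable: 4.0
print(summary_score([5, 5, 5, 3, 2]))  # volatile: 4.0 - 0.5*std ~ 3.4
```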

Conclusions:

The hybrid approach is based on statistical models and improves the rigor of the Part D plan rating system. Part D contracts' performance tends to fluctuate across measures and domains. The methodology rewards contracts with stable, strong performance, builds solid benchmarks, and promotes performance improvement among the plans.

Learning Objectives:
Define a meaningful approach to developing Medicare Part D performance measure star ratings; compare measure star rating methods; evaluate Part D organizations' performance by each measure and across measures; and identify performance measure benchmarks for Part D contracts' future quality improvement.

Presenting author's disclosure statement:

Qualified on the content I am responsible for because: The abstract submission was approved by the study sponsor, and I am directly involved in the project that generated this abstract.
Any relevant financial relationships? No

I agree to comply with the American Public Health Association Conflict of Interest and Commercial Support Guidelines, and to disclose to the participants any off-label or experimental uses of a commercial product or service discussed in my presentation.