A comparison of methods for measuring fidelity of implementation outcomes
Monday, October 29, 2012: 10:30 AM - 10:45 AM
Seow Ling Ong, MA, Research Dept., ETR Associates, Inc., Scotts Valley, CA
Jill R. Glassman, PhD, MSW, Research, ETR Associates, Scotts Valley, CA
Implementation fidelity is an important variable to measure, and to measure accurately, when evaluating evidence-based programs (EBPs), particularly when implementation fidelity is itself the outcome of a study, such as when evaluating training programs for educators implementing EBPs. The simplest and most commonly used data sources for measuring implementation fidelity are educator self-report implementation logs, which typically ask educators to indicate which activities in a lesson they implemented and what types of changes they made. However, these self-report measures are prone to error and often overestimate implementation fidelity. Other measures, such as interviews and in-person observations, are sometimes used but are more burdensome and costly. Through an NIH-funded SBIR grant, we evaluated an online training program designed to improve the implementation fidelity of teachers implementing the Reducing the Risk program. Two hundred nineteen educators across the United States were randomly assigned to either the intervention (training) or control (standard preparation using only the teaching guide) condition. Our primary outcome was increased implementation fidelity. We used several methods to measure implementation fidelity: online implementation logs for all 16 lessons, interviews with 24 educators within 2 days of their implementing one of 4 key lessons, observations of 25 educators implementing one of 4 key lessons, and coding of audio recordings of each of the 4 key lessons. We present a comparison of results from each of these methods, including similarities and differences in fidelity scores across methods, qualitative differences in results, the resources each method requires, and each method's contribution to an overall picture of fidelity.
Learning Areas:
Administer health education strategies, interventions and programs
Conduct evaluation related to programs, research, and other areas of practice
Implementation of health education strategies, interventions and programs
Planning of health education strategies, interventions, and programs
Learning Objectives:
List methods for collecting fidelity of implementation data.
Identify strengths and limitations of each method.
Keywords: Evaluation, Curricula
Presenting author's disclosure statement: Qualified on the content I am responsible for because: I have been Senior Evaluator or co-principal investigator on multiple federally funded grants focusing on the development or evaluation of health prevention/promotion programs and curricula. One particular interest has been the development of strategies for measuring implementation fidelity. I have had doctoral-level training in evaluation methodology and instrument development.
Any relevant financial relationships? No
I agree to comply with the American Public Health Association Conflict of Interest and Commercial Support Guidelines, and to disclose to the participants any off-label or experimental uses of a commercial product or service discussed in my presentation.