The program evaluation literature has paid little attention to how measuring the quality of implementation through observation requires tradeoffs between rigor (reliability and validity) and feasibility. We present a case example of how we addressed rigor in light of feasibility concerns while developing and conducting observations to measure the quality of implementation of a small education professional development program. We discuss the results of meta-evaluative analyses of the reliability of the quality observations, and we draw conclusions about conducting observations in a manner that is both rigorous and feasible. The results show that the feasibility constraints we faced did not notably reduce the rigor of our methods.
DOI: http://dx.doi.org/10.1016/j.evalprogplan.2014.02.003