Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status.
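For context, a cross-classified multilevel model of this kind is typically written with crossed random effects for students and examiners. The specification below is a generic sketch for illustration; the exact parameterization used in the study is not given in the abstract:

\[
y_{i(jk)} = \beta_0 + u_j + v_k + e_{i(jk)}, \qquad u_j \sim N(0, \sigma^2_u),\quad v_k \sim N(0, \sigma^2_v),\quad e_{i(jk)} \sim N(0, \sigma^2_e)
\]

where \(y_{i(jk)}\) is the CBM-R score for observation \(i\) of student \(j\) administered by examiner \(k\), \(u_j\) and \(v_k\) are the crossed student and examiner random effects, and the share of variance attributable to examiners would be estimated as \(\sigma^2_v / (\sigma^2_u + \sigma^2_v + \sigma^2_e)\).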
The purpose of this article is to provide a commentary on the current state of several measurement issues pertaining to curriculum-based measures of reading (R-CBM). We begin by providing an overview of the utility of R-CBM, followed by a presentation of five specific measurement considerations: 1) the reliability of R-CBM oral reading fluency, 2) issues pertaining to form effects, 3) the generalizability of scores from R-CBM, 4) measurement error, and 5) linearity of growth in R-CBM. We then conclude by presenting the purpose of this issue and broadly introducing the articles in the special issue.