The growth of school-based initiatives incorporating multitiered systems of support (MTSS) for social, emotional, and behavioral domains has fueled interest in behavioral assessment. These assessments are foundational to determining risk for behavioral difficulties, yet research to date has been limited with regard to when and how often to administer them. The present study evaluated these questions within the framework of behavioral stability and examined the extent to which behavior is stable when measured by two school-based behavioral assessments: the Direct Behavior Rating-Single-Item Scales (DBR-SIS) and the Behavioral and Emotional Screening System (BESS).
Research has supported the applied use of Direct Behavior Rating Single-Item Scale (DBR-SIS) targets of "academic engagement" and "disruptive behavior" for a range of purposes, including universal screening and progress monitoring. Though useful in evaluating social behavior and externalizing problems, these targets have limited utility in evaluating emotional behavior and internalizing problems. Thus, the primary purpose of this study was to support the initial development and validation of a novel DBR-SIS target of "unhappy," which was intended to tap into the specific construct of depression.
Reliable and valid data form the foundation for evidence-based practices, yet surprisingly few studies of school-based behavioral assessments have implemented one of the most fundamental approaches to construct validation, the multitrait-multimethod matrix (MTMM). To this end, the current study examined the reliability and validity of data derived from three commonly used school-based behavioral assessment methods (Direct Behavior Rating - Single Item Scales, systematic direct observations, and behavior rating scales) on three common constructs of interest: academically engaged, disruptive, and respectful behavior. Further, this study included data from different sources, including student self-report, teacher report, and external observers.
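At its core, an MTMM matrix is simply a correlation matrix arranged so that same-trait/different-method entries (convergent validity) can be compared with different-trait entries. The sketch below is illustrative only: the column names follow a hypothetical trait_method pattern and the scores are simulated, not data from the study.

```python
# Minimal multitrait-multimethod (MTMM) sketch with simulated scores.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_students = 200

traits = ["engaged", "disruptive", "respectful"]
methods = ["dbr", "sdo", "brs"]  # DBR-SIS, systematic direct observation, behavior rating scale

# Each trait gets a shared latent component plus method-specific noise, so
# same-trait/different-method correlations come out higher than cross-trait ones.
latent = {t: rng.normal(size=n_students) for t in traits}
data = {
    f"{t}_{m}": latent[t] + rng.normal(scale=0.8, size=n_students)
    for t in traits
    for m in methods
}
df = pd.DataFrame(data)

# The MTMM matrix is the full correlation matrix of trait-method units;
# convergent validity is read off the same-trait, different-method entries.
mtmm = df.corr().round(2)
print(mtmm)
```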
Responsive service delivery frameworks rely on the use of screening approaches to identify students in need of support and to guide subsequent assessment and intervention efforts. However, limited empirical investigation has been directed at how often screening should occur for social, emotional, and behavioral difficulties in school settings. The purpose of the current study was to evaluate the stability of risk status on 3 different screening instruments across 3 administrations over the course of a school year.
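One common way to quantify the stability of dichotomous risk status across two screening administrations is a chance-corrected agreement index such as Cohen's kappa. The following is a minimal sketch with invented fall and winter risk flags, not the study's data.

```python
# Stability of dichotomous risk status across two screening administrations,
# summarized with Cohen's kappa (hypothetical risk flags for illustration).
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(1)
fall_risk = rng.integers(0, 2, size=300)          # 1 = flagged at risk in fall
# Winter status mostly agrees with fall, with some students changing category.
flip = rng.random(size=300) < 0.15
winter_risk = np.where(flip, 1 - fall_risk, fall_risk)

print(confusion_matrix(fall_risk, winter_risk))   # who kept vs. changed status
print(cohen_kappa_score(fall_risk, winter_risk))  # chance-corrected stability
```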
Over the past 3 decades, there has been an unprecedented increase in students identified as eligible for special education as a result of meeting criteria for autism spectrum disorder (ASD). The increasing number of students with ASD in the schools presents significant challenges to teachers, school psychologists, and other school professionals working with this population. Although there is considerable research addressing assessment, identification, and support services for children with ASD, there is a need for further research focused on these topics within the school context.
Counterbalancing treatment order in experimental research design is well established as an option for reducing threats to internal validity, but in educational and psychological research, the effect of varying the order in which a single rater completes multiple tests has not been examined, and counterbalancing is rarely adhered to in practice. The current study examines the effect of test order on teachers' ratings of student behavior, using data from a behavior measure validation study. Using multilevel modeling to control for students nested within teachers, the effect of rating an earlier measure on the intercept or slope of a later behavior assessment was statistically significant in 22% of predictor main effects for the spring test period.
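With students nested within teachers, a two-level model with a random intercept for teacher is one straightforward way to test whether scores on an earlier-completed measure shift ratings on a later measure. A rough sketch with statsmodels follows; the variable names and simulated data are hypothetical, not the study's model.

```python
# Two-level model (students nested within teachers) testing whether an
# earlier-completed measure predicts ratings on a later measure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_teachers, per_teacher = 30, 15
teacher = np.repeat(np.arange(n_teachers), per_teacher)
teacher_effect = rng.normal(scale=0.5, size=n_teachers)[teacher]

earlier = rng.normal(size=teacher.size)
later = 0.3 * earlier + teacher_effect + rng.normal(scale=1.0, size=teacher.size)

df = pd.DataFrame({"teacher": teacher, "earlier": earlier, "later": later})

# Random intercept for teacher; fixed effect of the earlier rating.
model = smf.mixedlm("later ~ earlier", df, groups=df["teacher"])
result = model.fit()
print(result.summary())
```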
The purpose of this investigation was to evaluate the reliability of Direct Behavior Ratings-Social Competence (DBR-SC) ratings. Participants included 60 students identified as having deficits in social competence, as well as their 23 classroom teachers. Teachers used DBR-SC to complete ratings of 5 student behaviors within the general education setting on a daily basis across approximately 5 months.
The purpose of this investigation was to evaluate the models for interpretation and use that serve as the foundation of an interpretation/use argument for the Social and Academic Behavior Risk Screener (SABRS). The SABRS was completed by 34 teachers with regard to 488 students in a Midwestern high school during the winter portion of the academic year. Confirmatory factor analysis supported interpretation of SABRS data, suggesting the fit of a bifactor model specifying 1 broad factor (General Behavior) and 2 narrow factors (Social Behavior [SB] and Academic Behavior [AB]).
The purpose of this study was to examine the relations among teacher-implemented screening measures used to identify social, emotional, and behavioral risk. To this end, 5 screening options were evaluated: (a) Direct Behavior Rating - Single Item Scales (DBR-SIS), (b) Social Skills Improvement System - Performance Screening Guide (SSiS), (c) Behavioral and Emotional Screening System - Teacher Form (BESS), (d) office discipline referrals (ODRs), and (e) school nomination methods. The sample included 1974 students who were assessed tri-annually by their teachers (52% female, 93% non-Hispanic, 81% white).
The purpose of this study was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) targets of disruptive, engaged, and respectful behavior within school-based universal screening. Participants included 31 first-, 25 fourth-, and 23 seventh-grade teachers and their 1108 students, sampled from 13 schools across three geographic locations (Northeast, Southeast, and Midwest). Each teacher rated approximately 15 of their students across three measures, including DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994).
Am J Speech Lang Pathol, August 2013
Purpose: The purpose of this study was to examine the reliability of, and sources of variability in, language measures from interviews collected from young school-age children.
Method: Two 10-min interviews were collected from 20 at-risk kindergarten children by an examiner using a standardized set of questions. Test-retest reliability coefficients were calculated for 8 language measures.
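Test-retest reliability for a continuous language measure is typically indexed by the correlation between the two administrations. The sketch below uses simulated time-1 and time-2 scores purely for illustration; it is not the study's data or analysis code.

```python
# Test-retest reliability sketch: correlate time-1 and time-2 scores on one
# language measure (simulated values, for illustration only).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
time1 = rng.normal(loc=100, scale=15, size=20)
time2 = time1 + rng.normal(scale=8, size=20)   # second interview, with added noise

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```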
Direct Behavior Rating (DBR) is a repeatable and efficient method of behavior assessment that is used to document teacher perceptions of student behavior in the classroom. Time-series data can be graphically plotted and visually analyzed to evaluate patterns of behavior or intervention effects. This study evaluated the decision accuracy of novice raters who were presented with single-phase graphical plots of DBR data.
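Because DBR data are collected repeatedly, they lend themselves to the kind of simple single-phase time-series plot raters were asked to judge. A minimal matplotlib sketch with made-up daily ratings (DBR-SIS ratings are typically made on a 0-10 scale):

```python
# Single-phase time-series plot of daily DBR ratings (made-up values).
import matplotlib.pyplot as plt
import numpy as np

days = np.arange(1, 16)
# Invented daily ratings of academic engagement on a 0-10 scale.
engagement = np.clip(6 + np.random.default_rng(4).normal(scale=1.0, size=days.size), 0, 10)

plt.plot(days, engagement, marker="o")
plt.xlabel("School day")
plt.ylabel("DBR rating of academic engagement (0-10)")
plt.title("Single-phase DBR data for one student")
plt.ylim(0, 10)
plt.show()
```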
The purpose of this study was to investigate how Direct Behavior Rating Single Item Scales (DBR-SIS) involving targets of academically engaged, disruptive, and respectful behaviors function in school-based screening assessment. Participants included 831 students in kindergarten through eighth grade who attended schools in the northeastern United States. Teachers provided behavior ratings for a sample of students in their classrooms on the DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994).
The purpose of the current investigation was to develop and provide initial validation of the Social and Academic Behavior Risk Screener (SABRS). Research was conducted in elementary schools in the Southeast, with 54 teacher and 243 student participants. An initial item pool was created through review of developmental research on the trajectory of behavior problems and competencies, as well as various models of social, emotional, and academic competence.
Although treatment acceptability was originally proposed as a critical factor in determining the likelihood that a treatment will be used with integrity, more contemporary findings suggest that whether a treatment is likely to be adopted into routine practice depends on the complex interplay among a number of factors. The Usage Rating Profile-Intervention (URP-I; Chafouleas, Briesch, Riley-Tillman, & McCoach, 2009) was recently developed to assess these additional factors, conceptualized as potentially contributing to the quality of intervention use and maintenance over time. The purpose of the current study was to improve upon the URP-I by expanding and strengthening each of the original four subscales.
This study examined the impact of various components of rater training on the accuracy of behavior ratings made using Direct Behavior Rating-Single Item Scales (DBR-SIS). Specifically, the study investigated the addition of frame-of-reference and rater error training components to a standard package involving an overview followed by modeling, practice, and feedback. In addition, amount of exposure to the direct training component (i.
This study presents an evaluation of the diagnostic accuracy and concurrent validity of Direct Behavior Rating Single Item Scales for use in school-based behavior screening of second-grade students. Results indicated that each behavior target was a moderately to highly accurate predictor of behavioral risk. Optimal universal screening cut scores were also identified for each scale, with results supporting reduced false positive rates through the simultaneous use of multiple scales.
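Diagnostic accuracy and optimal cut scores for a screener are commonly evaluated with ROC analysis against a criterion measure of risk. The sketch below uses simulated scores and a Youden-index cut as one illustrative approach; it does not reproduce the study's analyses or cut scores.

```python
# ROC-based evaluation of a screening scale against a criterion risk indicator,
# with a cut score chosen via the Youden index (simulated data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
n = 500
at_risk = rng.integers(0, 2, size=n)            # criterion risk status (0/1)
# Higher screener scores indicate greater risk in this simulation.
screener = at_risk * 1.2 + rng.normal(size=n)

fpr, tpr, thresholds = roc_curve(at_risk, screener)
youden = tpr - fpr
cut = thresholds[np.argmax(youden)]

print(f"AUC = {roc_auc_score(at_risk, screener):.2f}")
print(f"Cut score maximizing the Youden index = {cut:.2f}")
```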
A total of 4 raters, including 2 teachers and 2 research assistants, used Direct Behavior Rating Single Item Scales (DBR-SIS) to measure the academic engagement and disruptive behavior of 7 middle school students across multiple occasions. Generalizability study results for the full model revealed modest to large magnitudes of variance associated with persons (students), occasions of measurement (day), and associated interactions. However, an unexpectedly low proportion of the variance in DBR data was attributable to the facet of rater, and a negligible variance component was found for the facet of rating occasion nested within day (10-min interval within a class period).
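Once variance components are estimated, the payoff of a generalizability study is largely arithmetic: a G coefficient expresses how much of the observed-score variance reflects true differences among persons after error facets are averaged over raters and days. The sketch below uses invented variance components for a simplified persons x raters x days design, not the estimates reported in the study.

```python
# Relative generalizability (G) coefficient from variance components for a
# persons x raters x days design. The component values below are invented.
def g_coefficient(var_p, var_pr, var_pd, var_res, n_raters, n_days):
    """Person variance divided by person variance plus rater- and day-related
    error variance averaged over the intended measurement design."""
    error = var_pr / n_raters + var_pd / n_days + var_res / (n_raters * n_days)
    return var_p / (var_p + error)

# Hypothetical components: person, person x rater, person x day, residual.
print(g_coefficient(var_p=0.60, var_pr=0.05, var_pd=0.20, var_res=0.30,
                    n_raters=1, n_days=5))   # one rater, five days of ratings
print(g_coefficient(var_p=0.60, var_pr=0.05, var_pd=0.20, var_res=0.30,
                    n_raters=2, n_days=10))  # more raters/days -> higher G
```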