Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention observations influenced the magnitude of four SCED effect sizes, including (a) non-overlap of all pairs (NAP), (b) baseline corrected tau (BC-Tau), (c) mean-phase difference (MPD), and (d) generalized least squares (GLS) when applied to hypothetical academic intervention SCED data. Higher levels of measurement error decreased the average magnitude of effect sizes, particularly NAP and BC-Tau. However, the number of intervention observations had minimal impact on the average magnitude of NAP and BC-Tau. Increasing the number of intervention observations dramatically increased the magnitude of GLS and MPD. Increasing the number of baseline observations also tended to increase the average magnitude of MPD. The ratio of baseline to intervention observations had a statistically but not practically significant influence on the average magnitude of NAP, BC-Tau, and GLS. Careful consideration is required when determining the length of time academic SCEDs are conducted and what effect sizes are used to summarize treatment outcomes. This article also highlights the value of using meaningful simulation conditions to understand the performance of SCED effect sizes.
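Of the four effect sizes compared, non-overlap of all pairs (NAP) has the simplest definition: the proportion of all baseline/intervention observation pairs in which the intervention observation exceeds the baseline one, with ties counted as half. The sketch below is an illustrative implementation of that standard definition, not the authors' simulation code:

```python
def nap(baseline, intervention):
    """Non-overlap of all pairs (NAP).

    Compares every baseline observation with every intervention
    observation; returns the fraction of pairs where the intervention
    value is higher, counting ties as 0.5 (range: 0.0 to 1.0).
    """
    pairs = [(b, t) for b in baseline for t in intervention]
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)
```

For example, with complete separation between phases (`nap([2, 3, 4], [5, 6, 7])`) NAP equals 1.0, while overlapping phases yield intermediate values. Because NAP depends only on pairwise rank comparisons, adding more intervention observations from the same distribution does little to change its expected value, which is consistent with the abstract's finding that intervention-phase length had minimal impact on NAP.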

Source
http://dx.doi.org/10.1016/j.jsp.2024.101347
