Visual inspection of single-subject data is the primary method behavior analysts use to interpret the effect of an independent variable on a dependent variable; however, there is no consensus on the most suitable method for teaching graph construction for single-subject designs. We systematically replicated and extended Tyner and Fienup (2015) using a repeated-measures, between-subjects design to compare the effects of instructor-led, video-model, and no-instruction control tutorials on the graphing performance of 81 master's students who reported some Microsoft Excel experience. Our mixed-design analysis revealed a statistically significant main effect of submission phase (pretest, tutorial, and posttest) within each tutorial group and a nonsignificant main effect of tutorial group.
J Appl Behav Anal, September 2021
Behavior analysts commonly use visual inspection to analyze single-case graphs, but studies on its reliability have produced mixed results. To examine this issue, we compared the Type I error rate and power of visual inspection with a novel approach: machine learning. Five expert visual raters analyzed 1,024 simulated AB graphs, which differed on number of points per phase, autocorrelation, trend, variability, and effect size.
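To make that simulation setup concrete, the sketch below shows one way AB-design data varying on the listed factors (points per phase, autocorrelation, trend, variability, and effect size) could be generated. The function name, default values, and AR(1) noise model are illustrative assumptions, not the procedure used in the study.

```python
# Illustrative sketch only: an assumed generator for AB-design data,
# not the authors' actual simulation procedure.
import numpy as np

def simulate_ab_graph(n_per_phase=10, autocorr=0.2, trend=0.0,
                      sd=1.0, effect_size=1.0, rng=None):
    """Simulate one AB graph: a baseline (A) phase followed by an intervention (B) phase.

    n_per_phase : number of data points in each phase
    autocorr    : lag-1 autocorrelation of the AR(1) noise
    trend       : slope added across all sessions
    sd          : standard deviation of the noise (variability)
    effect_size : level shift (in SD units) applied during the B phase
    """
    rng = rng or np.random.default_rng()
    n = 2 * n_per_phase

    # AR(1) noise: each point carries over a fraction of the previous error.
    noise = np.empty(n)
    noise[0] = rng.normal(scale=sd)
    for t in range(1, n):
        noise[t] = autocorr * noise[t - 1] + rng.normal(scale=sd)

    sessions = np.arange(n)
    level_shift = np.where(sessions >= n_per_phase, effect_size * sd, 0.0)
    y = trend * sessions + level_shift + noise
    return sessions, y, n_per_phase  # phase change occurs at index n_per_phase

# Example: one graph with moderate autocorrelation, a slight trend, and a large effect.
x, y, change_point = simulate_ab_graph(n_per_phase=8, autocorr=0.3,
                                       trend=0.1, sd=1.0, effect_size=2.0)
```

An AR(1) process is a common, simple way to inject serial dependence into simulated single-case data; whether the study used this exact noise model is not stated in the abstract.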
Prior research found that, without the naming cusp, children did not learn from instructional demonstrations presented before learn units (IDLUs; i.e., modeling an expected response twice for a learner prior to delivering an instructional antecedent); however, following the establishment of naming, they could.