The standardized mean difference (sometimes called Cohen's d) is an effect size measure widely used to describe the outcomes of experiments. It is a mathematically natural way to describe the difference between two groups whose data are normally distributed with different means but the same standard deviation. In that context, it determines several indexes of overlap between the two distributions.
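Under that equal-variance normal model, a given value of d maps directly onto standard overlap indexes such as Cohen's U3, the overlapping coefficient, and the common-language effect size. The sketch below is illustrative only and is not taken from the article; it simply applies these well-known formulas.

```python
# A minimal sketch (not from the article) showing how a standardized mean
# difference maps onto common overlap indexes under the equal-variance
# normal model described above.
from scipy.stats import norm

def overlap_indexes(d):
    """Return overlap-based interpretations of a standardized mean difference d."""
    u3 = norm.cdf(d)                 # Cohen's U3: proportion of one group below the other group's mean
    ovl = 2 * norm.cdf(-abs(d) / 2)  # overlapping coefficient of the two densities
    cles = norm.cdf(d / 2**0.5)      # probability a random score from one group exceeds one from the other
    return {"U3": u3, "OVL": ovl, "CLES": cles}

print(overlap_indexes(0.5))  # e.g. d = 0.5 -> U3 ~ 0.69, OVL ~ 0.80, CLES ~ 0.64
```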
Multivariate Behav Res
April 2024
Single case experimental designs (SCEDs) are an important class of research designs in behavioral and medical research. Although the What Works Clearinghouse prescribes design standards for SCEDs, these standards do not include statistically derived power computations. Recently we derived the equations for computing power for (AB) designs.
Res Synth Methods
January 2024
Conventional random-effects models in meta-analysis rely on large-sample approximations rather than exact small-sample results. While random-effects methods produce efficient estimates, and confidence intervals for the summary effect have correct coverage when the number of studies is sufficiently large, we demonstrate that conventional methods yield confidence intervals that are too narrow when the number of studies is small, with the severity depending on the configuration of sample sizes across studies, the degree of true heterogeneity, and the number of studies. We introduce two alternative variance estimators with better small-sample properties, investigate degrees-of-freedom adjustments for computing confidence intervals, and study their effectiveness via simulation.
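For concreteness, the sketch below contrasts the conventional z-based random-effects interval with one widely used small-sample adjustment, the Hartung-Knapp variance estimator paired with a t(k-1) reference distribution. These are not necessarily the estimators studied in the article; they simply illustrate the kind of adjustment at issue.

```python
# A hedged illustration (not the article's exact estimators): conventional
# large-sample (z-based) random-effects CI versus the Hartung-Knapp variance
# estimator with a t(k-1) interval, a well-known small-sample adjustment.
import numpy as np
from scipy.stats import norm, t

def random_effects_cis(y, v, alpha=0.05):
    """y: study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1 / v
    # DerSimonian-Laird estimate of between-study variance tau^2
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    # conventional variance and z-based interval
    se_conv = np.sqrt(1 / np.sum(w_star))
    z = norm.ppf(1 - alpha / 2)
    ci_conv = (mu - z * se_conv, mu + z * se_conv)
    # Hartung-Knapp variance and t(k-1) interval
    se_hk = np.sqrt(np.sum(w_star * (y - mu) ** 2) / ((k - 1) * np.sum(w_star)))
    q = t.ppf(1 - alpha / 2, df=k - 1)
    ci_hk = (mu - q * se_hk, mu + q * se_hk)
    return mu, ci_conv, ci_hk

mu, ci_z, ci_hk = random_effects_cis(y=[0.2, 0.5, 0.1, 0.4], v=[0.04, 0.06, 0.05, 0.03])
print(mu, ci_z, ci_hk)  # the HK interval is typically wider when k is small
```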
N-of-1 trials, a special case of Single Case Experimental Designs (SCEDs), are prominent in clinical medical research, and in psychiatry in particular, owing to the growing significance of precision/personalized medicine. It is imperative that these clinical trials be conducted, and their data analyzed, using the highest standards to guard against threats to validity. This systematic review examined publications of medical N-of-1 trials to determine whether they meet (a) the evidence standards and (b) the criteria for demonstrating evidence of a relation between an independent variable and an outcome variable per the What Works Clearinghouse (WWC) standards for SCEDs.
It is common practice in both randomized and quasi-experiments to adjust for baseline characteristics when estimating the average effect of an intervention. The inclusion of a pre-test, for example, can reduce both the standard error of this estimate and, in non-randomized designs, its bias. At the same time, it is also standard to report the effect of an intervention in standardized effect size units, thereby making it comparable to other interventions and studies.
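One common way to combine the two practices, sketched below, is to estimate the treatment effect from a covariate-adjusted regression but standardize it by the unadjusted pooled standard deviation of the outcome, keeping the result on the familiar standardized-mean-difference scale. This is an illustrative assumption, not the estimator developed in the article, and the function and variable names are hypothetical.

```python
# A minimal sketch (an assumption, not the article's estimator): regression-
# adjusted treatment effect standardized by the unadjusted pooled SD of the outcome.
import numpy as np
import statsmodels.api as sm

def adjusted_smd(y, treat, pretest):
    """y: outcome; treat: 0/1 indicator; pretest: baseline covariate."""
    X = sm.add_constant(np.column_stack([treat, pretest]))
    fit = sm.OLS(y, X).fit()
    b_treat = fit.params[1]  # covariate-adjusted mean difference
    n1, n0 = np.sum(treat == 1), np.sum(treat == 0)
    s1, s0 = np.var(y[treat == 1], ddof=1), np.var(y[treat == 0], ddof=1)
    sd_pooled = np.sqrt(((n1 - 1) * s1 + (n0 - 1) * s0) / (n1 + n0 - 2))  # unadjusted SD
    return b_treat / sd_pooled

rng = np.random.default_rng(1)
pre = rng.normal(size=200)
tr = rng.integers(0, 2, 200)
y = 0.4 * tr + 0.6 * pre + rng.normal(size=200)
print(adjusted_smd(y, tr, pre))
```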
Behav Res Methods
October 2023
Currently, the design standards for single-case experimental designs (SCEDs) are based on validity considerations as prescribed by the What Works Clearinghouse. However, there is also a need for design considerations, such as power, grounded in statistical analysis. We derive and compute power for (AB) designs with multiple cases, which are common in SCEDs.
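The short simulation below illustrates design-stage power for a multiple-case (AB) design. It is a Monte Carlo sketch under simplifying assumptions (independent normal errors, a case-level t-test), not the closed-form power equations derived in the work described above, and all parameter values are hypothetical.

```python
# An illustrative Monte Carlo sketch (not the derived power equations): power of
# a multiple-case (AB) design, assuming independent normal errors within cases
# and testing the case-level mean differences with a one-sample t-test.
import numpy as np
from scipy.stats import ttest_1samp

def ab_power(n_cases=4, n_a=5, n_b=5, delta=1.0, reps=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        diffs = []
        for _ in range(n_cases):
            baseline = rng.normal(0.0, 1.0, n_a)     # A phase
            treatment = rng.normal(delta, 1.0, n_b)  # B phase, shifted by delta SDs
            diffs.append(treatment.mean() - baseline.mean())
        p = ttest_1samp(diffs, 0.0).pvalue
        hits += p < alpha
    return hits / reps

print(ab_power())  # estimated power under these assumed design parameters
```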
Descriptive analyses of socially important or theoretically interesting phenomena and trends are a vital component of research in the behavioral, social, economic, and health sciences. Such analyses yield reliable results when using representative individual participant data (IPD) from studies with complex survey designs, including educational large-scale assessments (ELSAs) or social, health, and economic survey and panel studies. The meta-analytic integration of these results offers unique and novel research opportunities to provide strong empirical evidence of the consistency and generalizability of important phenomena and trends.
Although statistical practices for evaluating intervention effects in single-case experimental designs (SCEDs) have gained prominence in recent times, models have yet to incorporate and investigate all of their analytic complexities. Most of these statistical models incorporate slopes and autocorrelations, both of which contribute to trend in the data. The question that arises is whether, in SCED data that show trend, there is indeterminacy between estimating slope and autocorrelation, because both contribute to trend and the data contain a limited number of observations.
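The small simulation below illustrates why this indeterminacy is plausible: short series generated by a pure AR(1) process with no deterministic slope frequently look trended, so an ordinary least-squares slope test flags a "trend" far more often than its nominal error rate. This is a hedged illustration, not the article's model, and the parameter values are hypothetical.

```python
# A hedged illustration of the slope/autocorrelation indeterminacy: short series
# from a pure AR(1) process (no true slope) often appear trended to an OLS slope test.
import numpy as np
from scipy.stats import linregress

def apparent_trend_rate(n_obs=10, phi=0.6, reps=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n_obs)
    flagged = 0
    for _ in range(reps):
        y = np.zeros(n_obs)
        for i in range(1, n_obs):            # AR(1) errors, no deterministic slope
            y[i] = phi * y[i - 1] + rng.normal()
        if linregress(t, y).pvalue < alpha:  # OLS slope "detects" a trend
            flagged += 1
    return flagged / reps

print(apparent_trend_rate())  # well above the nominal 5% rate
```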
Meta-analysis has been used to examine the effectiveness of childhood obesity prevention efforts, yet conventional meta-analytic methods restrict the kinds of studies included, and either narrowly define mechanisms and agents of change or examine the effectiveness of whole interventions as opposed to the specific actions that comprise them. Taxonomic meta-analytic methods widen the aperture of what can be included in a meta-analysis data set, allowing for the inclusion of many types of interventions and study designs. The National Collaborative on Childhood Obesity Research Childhood Obesity Evidence Base (COEB) project focuses on interventions intended to prevent obesity in children 2-5 years old, with BMI as the outcome measure.
To evaluate the efficacy of childhood obesity interventions and construct a taxonomy of the intervention components that are most effective in changing obesity-related health outcomes in children 2-5 years of age. Comprehensive searches located 51 studies from 18,335 unique records. Eligible studies: (1) assessed children aged 2-5 living in the United States; (2) evaluated an intervention to improve weight status; (3) identified a same-aged comparison group; (4) measured BMI; and (5) were available between January 2005 and August 2019.
There is a great need for analytic techniques that allow for the synthesis of learning across seemingly idiosyncratic interventions. The primary objective of this paper is to introduce taxonomic meta-analysis and explain how it is different from conventional meta-analysis. Conventional meta-analysis has previously been used to examine the effectiveness of childhood obesity prevention interventions.
In this study, we reanalyze recent empirical research on replication from a meta-analytic perspective. We argue that there are different ways to define "replication failure," and that analyses can focus either on exploring variation among replication studies or on assessing whether their results contradict the findings of the original study. We apply this framework to a set of psychological findings that have been replicated and assess the sensitivity of these analyses.
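The sketch below illustrates the two kinds of analyses contrasted above, under standard meta-analytic assumptions rather than the article's exact procedures: (a) a Q test for heterogeneity among the replication estimates, and (b) a z test of whether the pooled replication estimate differs from the original study's estimate. All input values are hypothetical.

```python
# A hedged sketch of two replication analyses: heterogeneity among replications,
# and inconsistency between the original study and the pooled replications.
import numpy as np
from scipy.stats import chi2, norm

def replication_checks(orig, v_orig, reps, v_reps):
    reps, v_reps = np.asarray(reps, float), np.asarray(v_reps, float)
    w = 1 / v_reps
    pooled = np.sum(w * reps) / np.sum(w)
    v_pooled = 1 / np.sum(w)
    # (a) heterogeneity among replications
    Q = np.sum(w * (reps - pooled) ** 2)
    p_het = chi2.sf(Q, df=len(reps) - 1)
    # (b) difference between original and pooled replications
    z = (orig - pooled) / np.sqrt(v_orig + v_pooled)
    p_diff = 2 * norm.sf(abs(z))
    return {"Q": Q, "p_heterogeneity": p_het,
            "pooled_replication": pooled, "p_vs_original": p_diff}

print(replication_checks(orig=0.60, v_orig=0.04,
                         reps=[0.15, 0.05, 0.25], v_reps=[0.02, 0.03, 0.025]))
```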
Immediacy is one of the necessary criteria to show strong evidence of a treatment effect in single-case experimental designs (SCEDs). However, with the exception of Natesan and Hedges (2017), no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change points between the baseline and treatment phases as unknown.
In this rejoinder, we discuss Mathur and VanderWeele's response to our article, "Statistical Analyses for Studying Replication: Meta-Analytic Perspectives," which appears in this current issue. We attempt to clarify a point of confusion regarding the inclusion of an original study in an analysis of replication, and the potential impact of publication bias. We then discuss the methods used by Mathur and VanderWeele to conduct an alternative analysis of the Gambler's Fallacy example from our article.
Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory.
Psychol Methods
October 2019
Formal empirical assessments of replication have recently become more prominent in several areas of science, including psychology. These assessments have used different statistical approaches to determine if a finding has been replicated. The purpose of this article is to provide several alternative conceptual frameworks that lead to different statistical analyses to test hypotheses about replication.
Behav Res Methods
February 2018
Equation (26) is formatted incorrectly in the pdf version. It should appear as follows.
Objective/Study Question: To estimate and compare sample average treatment effects (SATE) and population average treatment effects (PATE) of a resident duty hour policy change on patient and resident outcomes using data from the Flexibility in Duty Hour Requirements for Surgical Trainees Trial ("FIRST Trial").
Data Sources/Study Setting: Secondary data from the National Surgical Quality Improvement Program and the FIRST Trial (2014-2015).
Study Design: The FIRST Trial was a cluster-randomized pragmatic noninferiority trial designed to evaluate the effects on patient and resident outcomes of a resident work-hour policy change permitting greater flexibility in scheduling.
Psychol Methods
December 2017
Although immediacy is one of the necessary criteria for showing strong evidence of a causal relation in single case designs (SCDs), no inferential statistical tool is currently used to demonstrate it. We propose a Bayesian unknown change-point model to investigate and quantify immediacy in SCD analysis. Unlike visual analysis, which considers only 3-5 observations in consecutive phases to investigate immediacy, this model considers all data points.
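To make the unknown-change-point idea concrete, the sketch below computes a posterior over the change point on a grid, with the error standard deviation treated as known and flat priors on the two phase means so they can be integrated out analytically. This is a deliberately simplified assumption-laden illustration, not the full Bayesian model proposed in the article, and the example series is hypothetical.

```python
# A minimal, hedged sketch of an unknown-change-point posterior: known error SD,
# flat priors on the phase means, grid over candidate change points.
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior probability that the treatment phase starts at each time index."""
    y = np.asarray(y, float)
    n = len(y)

    def log_marginal(seg):  # segment marginal likelihood with a flat prior on its mean
        m = len(seg)
        rss = np.sum((seg - seg.mean()) ** 2)
        return (-(m - 1) / 2) * np.log(2 * np.pi * sigma**2) \
               - 0.5 * np.log(m) - rss / (2 * sigma**2)

    cps = np.arange(2, n - 1)  # candidate change points (at least 2 observations per phase)
    logp = np.array([log_marginal(y[:c]) + log_marginal(y[c:]) for c in cps])
    post = np.exp(logp - logp.max())
    return cps, post / post.sum()

y = [1.2, 0.8, 1.1, 0.9, 1.0, 2.9, 3.2, 3.0, 3.1, 2.8]  # hypothetical AB series
cps, post = changepoint_posterior(y)
print(dict(zip(cps, np.round(post, 3))))  # mass should concentrate at index 5
```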
Psychol Methods
March 2017
I discuss how methods that adjust for publication selection involve implicit or explicit selection models. Such models describe the relation between the studies conducted and those actually observed. I argue that the evaluation of selection models should include an evaluation of the plausibility of the empirical implications of that model.
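As a toy illustration of what such a selection model looks like, the sketch below fits a one-step weight-function model by maximum likelihood: studies whose z statistic exceeds 1.96 are always observed, nonsignificant studies are observed with relative probability w, and a common true effect is assumed. This is an assumed simplification in the spirit of step-function selection models, not any specific published model, and the data are hypothetical.

```python
# A hedged toy selection model: one-step weight function, common true effect,
# (mu, w) estimated by maximum likelihood from the observed studies.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_step_selection(y, v):
    y, v = np.asarray(y, float), np.asarray(v, float)
    s = np.sqrt(v)
    sig = y / s > 1.96  # "significant" studies, always observed

    def neg_loglik(theta):
        mu, logit_w = theta
        w = 1 / (1 + np.exp(-np.clip(logit_w, -30, 30)))  # keep w in (0, 1)
        dens = norm.pdf(y, loc=mu, scale=s)
        weight = np.where(sig, 1.0, w)
        # normalizing constant: probability a study drawn from N(mu, v_i) is observed
        p_sig = norm.sf(1.96 * s, loc=mu, scale=s)
        A = p_sig + w * (1 - p_sig)
        return -np.sum(np.log(weight * dens / A))

    res = minimize(neg_loglik, x0=[np.mean(y), 0.0], method="Nelder-Mead")
    return res.x[0], 1 / (1 + np.exp(-res.x[1]))

# hypothetical observed effects that look suspiciously "all significant"
mu_hat, w_hat = fit_step_selection(y=[0.45, 0.52, 0.60, 0.41, 0.55],
                                   v=[0.04, 0.05, 0.06, 0.04, 0.05])
print(mu_hat, w_hat)  # mu_hat is typically pulled below the naive weighted mean
```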
A task force of experts was convened by the American Psychological Association (APA) to update the knowledge and policy about the impact of violent video game use on potential adverse outcomes. This APA Task Force on Media Violence examined the existing literature, including the meta-analyses in the field, since the last APA report on media violence in 2005. Because the most recent meta-analyses were published in 2010 and reflected work through 2009, the task force conducted a search of the published studies from 2009-2013.
When we speak about heterogeneity in a meta-analysis, our intent is usually to understand the substantive implications of the heterogeneity. If an intervention yields a mean effect size of 50 points, we want to know if the effect size in different populations varies from 40 to 60, or from 10 to 90, because this speaks to the potential utility of the intervention. While there is a common belief that the I² statistic provides this information, it actually does not.
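The sketch below makes the point concrete using standard meta-analytic formulas (assumed here, not taken from the article): I² describes the proportion of observed variation attributable to heterogeneity, whereas a prediction interval describes the range of true effects, and two data sets with similar I² can have very different prediction intervals. The example data are hypothetical.

```python
# A hedged sketch: same-looking I^2, very different spread of true effects,
# as shown by the random-effects prediction interval.
import numpy as np
from scipy.stats import t

def i2_and_prediction_interval(y, v, alpha=0.05):
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1 / v
    mu_f = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_f) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)                 # DerSimonian-Laird
    i2 = max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    q = t.ppf(1 - alpha / 2, df=k - 2)
    pi = (mu - q * np.sqrt(tau2 + se**2), mu + q * np.sqrt(tau2 + se**2))
    return i2, pi

# two hypothetical meta-analyses: similar I^2, very different prediction intervals
print(i2_and_prediction_interval(y=[48, 52, 50, 49, 51], v=[0.5] * 5))
print(i2_and_prediction_interval(y=[20, 80, 50, 35, 65], v=[130] * 5))
```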