The identification of an empirically adequate theoretical construct requires determining whether a theoretically predicted effect is sufficiently similar to an observed effect. To this end, we propose a simple similarity measure, describe its application in different research designs, and use computer simulations to estimate the necessary sample size for a given observed effect. As our main example, we apply this measure to recent meta-analytical research on precognition.
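The abstract does not specify the similarity measure or the simulation design, so the following is only a minimal sketch of the general idea: Monte Carlo estimation of the smallest sample size at which an observed standardized effect is "sufficiently similar" to a predicted one. The tolerance criterion, the two-group design, and all parameter values (d_true, d_pred, tol, the 80% target) are illustrative assumptions, not the authors' measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_ok(d_pred, d_obs, tol=0.1):
    """Hypothetical similarity criterion: predicted and observed
    standardized effects differ by less than a tolerance."""
    return abs(d_pred - d_obs) < tol

def similarity_power(n, d_true, d_pred, tol=0.1, reps=2000):
    """Estimate how often a two-group study with n per group yields
    an observed Cohen's d 'similar' to the predicted effect."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(d_true, 1.0, n)  # treatment group
        b = rng.normal(0.0, 1.0, n)     # control group
        # Pooled SD (equal group sizes) and observed Cohen's d.
        sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        d_obs = (a.mean() - b.mean()) / sp
        hits += similarity_ok(d_pred, d_obs, tol)
    return hits / reps

# Smallest per-group n at which the criterion holds in >= 80% of runs.
for n in range(50, 2001, 50):
    if similarity_power(n, d_true=0.3, d_pred=0.3) >= 0.8:
        print(f"required n per group approx. {n}")
        break
```

Under these assumptions the required n grows quickly as the tolerance shrinks, which is why a simulation rather than a closed-form power formula is a natural tool here.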
In psychology as elsewhere, the main statistical inference strategy for establishing empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST results, however, implies that such results lack sufficient statistical power and thus feature unacceptably high error rates. Using data simulation to estimate the error rates of NHST results, we advocate the research program strategy (RPS) as a superior methodology.
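As an illustration of the data-simulation approach (a sketch under assumed parameters, not the authors' code), one can estimate the Type I and Type II error rates of a two-sample t-test at a given sample size and assumed true effect; the design, effect size, and alpha level below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def error_rates(n, d_true, alpha=0.05, reps=10_000):
    """Monte Carlo estimate of a two-sample t-test's error rates.

    Type I rate: rejections when the true effect is zero.
    Type II rate: non-rejections when the true effect is d_true.
    """
    false_pos = false_neg = 0
    for _ in range(reps):
        # Null world: both groups share the same mean.
        a0, b0 = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a0, b0).pvalue < alpha:
            false_pos += 1
        # Alternative world: groups differ by d_true standard deviations.
        a1, b1 = rng.normal(d_true, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a1, b1).pvalue >= alpha:
            false_neg += 1
    return false_pos / reps, false_neg / reps

# A small study of a medium effect: power lands well below 80%.
t1, t2 = error_rates(n=20, d_true=0.5)
print(f"Type I approx. {t1:.3f}, Type II approx. {t2:.3f}, "
      f"power approx. {1 - t2:.3f}")
```

Running many such simulated "worlds" makes the point in the abstract concrete: a significant result from an underpowered design carries a high long-run error rate even when alpha is nominally controlled.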
Although legal contexts are subject to biased reasoning and decision making, identifying and testing debiasing techniques has largely remained an open task. We report on experimentally deploying the technique "giving reasons pro et contra" with professional (N = 239) and lay judges (N = 372) at Swedish municipal courts. Using a mock legal scenario, participants assessed the relevance of an eyewitness's previous conviction for his credibility.
Before replication becomes mainstream, its potential for generating theoretical knowledge had better be clear. Replicating statistically significant nonrandom data shows that an original study made a discovery; replicating a specified theoretical effect shows that an original study corroborated a theory. Yet only in the latter case is replication a necessary, sound, and worthwhile strategy.
The gold standard for an empirical science is the replicability of its research results. But the estimated average replicability rate of the key effects that top-tier psychology journals report falls between 36% and 39% (objective vs. subjective rate; Open Science Collaboration, 2015).