Publications by authors named "Angelika M Stefan"

A fundamental part of experimental design is determining the sample size of a study. However, sparse information about population parameters and effect sizes before data collection makes effective sample size planning challenging: research designs may rest on inaccurate a priori assumptions, causing studies to use resources inefficiently or to produce inconclusive results.
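
To make the problem concrete, consider a conventional a priori power analysis. The sketch below (a generic Python illustration with hypothetical numbers, not taken from the article) shows how strongly the power of a two-sample t-test at a given sample size depends on the effect size assumed up front:

```python
# Illustration (not from the article): how sensitive a priori sample size
# planning is to the assumed effect size. Simulated power of a two-sample
# t-test at alpha = .05 for several assumed effects and group sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(effect_size, n_per_group, n_sims=2000, alpha=0.05):
    """Proportion of significant two-sample t-tests for a given true effect."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for d in (0.2, 0.5, 0.8):          # small / medium / large assumed effects
    for n in (20, 50, 100, 200):
        print(f"d = {d}, n/group = {n}: power ~ {simulated_power(d, n):.2f}")
```

If the assumed effect size is off, the sample size such an analysis recommends can be far too small or far too large, which is exactly the inefficiency described above.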


With the recent development of easy-to-use tools for Bayesian analysis, psychologists have started to embrace Bayesian hierarchical modeling. Bayesian hierarchical models provide an intuitive account of inter- and intraindividual variability and are particularly suited for the evaluation of repeated-measures designs. Here, we provide guidance for model specification and interpretation in Bayesian hierarchical modeling and describe common pitfalls that can arise in the process of model fitting and evaluation.
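
As a minimal sketch of the kind of model structure involved (my own generic example with hypothetical values, not the authors' specification), the following snippet simulates a two-level normal model for repeated measures and shows the shrinkage of individual estimates toward the group mean that hierarchical models produce:

```python
# Minimal sketch (assumed values, not the authors' model): a two-level
# normal model for repeated measures, where each participant j has a true
# effect theta_j drawn from a population distribution.
#
#   y_ij    ~ Normal(theta_j, sigma)    observation i of participant j
#   theta_j ~ Normal(mu, tau)           participant-level effects
import numpy as np

rng = np.random.default_rng(2)
mu, tau, sigma = 0.5, 0.3, 1.0          # hypothetical population values
n_subjects, n_trials = 30, 20

theta = rng.normal(mu, tau, n_subjects)                    # true effects
y = rng.normal(theta[:, None], sigma, (n_subjects, n_trials))

# Per-subject means vs. shrinkage estimates: hierarchical models pull
# noisy individual estimates toward the group mean (here with tau and
# sigma treated as known, for simplicity).
subj_means = y.mean(axis=1)
se2 = sigma**2 / n_trials
weight = tau**2 / (tau**2 + se2)        # reliability of each subject mean
shrunk = weight * subj_means + (1 - weight) * subj_means.mean()

print("sd of raw subject means:   ", subj_means.std().round(3))
print("sd of shrinkage estimates: ", shrunk.std().round(3))
print("true sd of theta:          ", theta.std().round(3))
```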


In van Doorn et al. (2021), we outlined a series of open questions concerning Bayes factors for mixed-effects model comparison, with an emphasis on the impact of aggregation, the effect of measurement error, the choice of prior distributions, and the detection of interactions. Seven expert commentaries (partially) addressed these initial questions.


In many research fields, the widespread use of questionable research practices has jeopardized the credibility of scientific results. One of the most prominent questionable research practices is p-hacking. Typically, p-hacking is defined as a compound of strategies aimed at rendering non-significant hypothesis testing results significant.
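
One well-known strategy of this kind is optional stopping. The simulation below (my illustration, not taken from the article) shows how repeatedly testing while adding participants inflates the false-positive rate well above the nominal significance level:

```python
# Sketch (my illustration): optional stopping under a true null effect.
# Testing after every batch of new participants and stopping at the first
# p < .05 inflates the false-positive rate far above the nominal alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def hacked_significant(n_start=20, n_max=100, step=10, alpha=0.05):
    """Add participants in batches and test after every batch."""
    x = rng.normal(0.0, 1.0, n_start)      # true effect is exactly zero
    while True:
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            return True                    # stop as soon as p < alpha
        if len(x) >= n_max:
            return False
        x = np.concatenate([x, rng.normal(0.0, 1.0, step)])

n_sims = 2000
fp = sum(hacked_significant() for _ in range(n_sims)) / n_sims
print(f"false-positive rate with optional stopping: {fp:.2f} (nominal: 0.05)")
```

Even this single strategy roughly triples the nominal error rate; combining several p-hacking strategies compounds the problem.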


Bayesian inference requires the specification of prior distributions that quantify the pre-data uncertainty about parameter values. One way to specify prior distributions is through prior elicitation, an interview method guiding field experts through the process of expressing their knowledge in the form of a probability distribution. However, prior distributions elicited from experts can be subject to idiosyncrasies of experts and elicitation procedures, raising the spectre of subjectivity and prejudice.
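
A typical elicitation step is translating an expert's stated quantiles into the parameters of a probability distribution. The sketch below (with hypothetical numbers of my own, not from the article) fits a normal prior to an elicited median and 90th percentile:

```python
# Sketch (hypothetical numbers): fit a normal prior to two elicited
# quantiles of an effect size, a common move in prior elicitation.
from scipy import stats

elicited_median = 0.4   # expert: "the effect is most likely around 0.4"
elicited_p90 = 0.8      # expert: "90% sure it is below 0.8"

mu = elicited_median                       # median of a normal = its mean
sigma = (elicited_p90 - mu) / stats.norm.ppf(0.90)

prior = stats.norm(mu, sigma)
print(f"elicited prior: Normal(mu={mu:.2f}, sigma={sigma:.3f})")
print(f"check: P(effect < 0.8) = {prior.cdf(0.8):.2f}")
```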


In a sequential hypothesis test, the analyst checks at multiple points during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: the Sequential Probability Ratio Test (SPRT; Psychological Methods, 25(2), 206-226, 2020) and the Sequential Bayes Factor Test (SBFT; Psychological Methods, 22(2), 322-339, 2017).
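
The SPRT goes back to Wald's classical sequential procedure. The following sketch (a simplified generic version with hypothetical data, not the implementations compared in the article) shows its decision rule for two simple hypotheses about a normal mean:

```python
# Sketch of Wald's SPRT decision rule (simplified): accumulate the
# log-likelihood ratio observation by observation and stop as soon as it
# crosses a threshold derived from the desired error rates alpha and beta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def sprt(data_stream, mu0=0.0, mu1=0.5, sd=1.0, alpha=0.05, beta=0.05):
    upper = np.log((1 - beta) / alpha)   # accept H1 when LLR exceeds this
    lower = np.log(beta / (1 - alpha))   # accept H0 when LLR falls below
    llr = 0.0
    for n, x in enumerate(data_stream, start=1):
        llr += stats.norm.logpdf(x, mu1, sd) - stats.norm.logpdf(x, mu0, sd)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Data generated under H1 (true mean 0.5): the test usually stops early.
decision, n_used = sprt(rng.normal(0.5, 1.0, 1000))
print(f"decision: {decision} after {n_used} observations")
```

The SBFT follows the same stop-when-evidence-suffices logic but monitors a Bayes factor against evidence thresholds instead of a likelihood ratio against Wald's bounds.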

  • The Bayesian statistical framework utilizes prior distributions to reflect pre-existing knowledge about parameter values, which significantly impact the results of analyses.
  • Prior elicitation is a method to derive these distributions based on expert input, but it involves several critical decisions that can influence the outcomes.
  • Researchers should carefully navigate the elicitation process, considering the setup, the elicitation method, and how to combine inputs from different experts, as different choices can lead to varying priors from the same expert group.
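
One such choice is how to pool the elicited distributions. A common option is a linear opinion pool, i.e., a weighted mixture of the individual expert densities; the sketch below uses hypothetical priors and equal weights of my own choosing:

```python
# Sketch (hypothetical priors): combine priors elicited from several
# experts via a linear opinion pool, a weighted mixture of their densities.
import numpy as np
from scipy import stats

# Hypothetical elicited priors for the same parameter from three experts.
experts = [stats.norm(0.3, 0.15), stats.norm(0.5, 0.20), stats.norm(0.2, 0.10)]
weights = [1 / 3, 1 / 3, 1 / 3]   # equal weights; itself an elicitation choice

def pooled_pdf(x):
    """Density of the linear opinion pool (weighted mixture) at x."""
    return sum(w * e.pdf(x) for w, e in zip(weights, experts))

# Crude check that the pooled density still integrates to one.
grid = np.linspace(-1.0, 2.0, 3001)
area = (pooled_pdf(grid) * (grid[1] - grid[0])).sum()
print(f"pooled prior integrates to approximately {area:.3f}")
```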

Longitudinal studies are the gold standard for research on time-dependent phenomena in the social sciences. However, they often entail high costs due to multiple measurement occasions and a long overall study duration. It is therefore useful to optimize these design factors while keeping the design highly informative.
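
Under a simple linear growth model, such trade-offs can be made explicit, because the precision of an individual slope estimate is a closed-form function of the measurement occasions: Var(slope) = sigma^2 / sum_i (t_i - t_bar)^2. The sketch below (my illustration, not the paper's method) compares equally spaced designs with different numbers of waves over a fixed study duration:

```python
# Sketch (my illustration): precision of an individual slope estimate in a
# linear growth model as a function of the number of measurement occasions,
# holding the total study duration fixed.
import numpy as np

sigma = 1.0        # hypothetical residual standard deviation
duration = 12.0    # fixed study duration, e.g. in months

for m in (3, 4, 6, 12):                       # number of occasions
    t = np.linspace(0.0, duration, m)         # equally spaced waves
    var_slope = sigma**2 / np.sum((t - t.mean()) ** 2)
    print(f"{m:2d} occasions: Var(slope) = {var_slope:.4f}")
```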


Well-designed experiments are likely to yield compelling evidence with efficient sample sizes. Bayes Factor Design Analysis (BFDA) is a recently developed methodology that allows researchers to balance the informativeness and efficiency of their experiment (Schönbrodt & Wagenmakers, Psychonomic Bulletin & Review, 25(1), 128-142, 2018). With BFDA, researchers can control the rate of misleading evidence and, in addition, plan for a target strength of evidence.
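
The core idea of BFDA is Monte Carlo simulation: generate many data sets under H1 and under H0 at a planned design and inspect the resulting distribution of Bayes factors. The sketch below (my simplified illustration, not the authors' implementation) approximates the Bayes factor from BIC values to keep the example dependency-free:

```python
# Sketch of a fixed-n Bayes Factor Design Analysis (simplified): simulate
# data under H1 and H0 and tabulate how often the Bayes factor reaches a
# target evidence threshold. BF10 is approximated via the BIC difference.
import numpy as np

rng = np.random.default_rng(5)

def bf10_bic(a, b):
    """Approximate BF10 for a two-group mean difference via BIC values."""
    y = np.concatenate([a, b])
    n = len(y)
    sse0 = np.sum((y - y.mean()) ** 2)                    # H0: equal means
    sse1 = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
    # Count only the mean parameters; the shared variance term cancels.
    bic0 = n * np.log(sse0 / n) + 1 * np.log(n)
    bic1 = n * np.log(sse1 / n) + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2)

def evidence_rates(effect, n_per_group=50, n_sims=2000, threshold=6):
    bfs = np.array([bf10_bic(rng.normal(0, 1, n_per_group),
                             rng.normal(effect, 1, n_per_group))
                    for _ in range(n_sims)])
    return np.mean(bfs > threshold), np.mean(bfs < 1 / threshold)

for label, d in (("under H1 (d = 0.5)", 0.5), ("under H0 (d = 0)", 0.0)):
    p_h1, p_h0 = evidence_rates(d)
    print(f"{label}: P(BF10 > 6) = {p_h1:.2f}, P(BF10 < 1/6) = {p_h0:.2f}")
```

Read under H1, the first rate is the chance of reaching the target evidence strength; read under H0, it is the rate of misleading evidence that BFDA lets researchers control.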
