In their recent paper, Forbes et al. (2019; FWMK) evaluate the replicability of network models in two studies. They identify considerable replicability issues, concluding that "current 'state-of-the-art' methods in the psychopathology network literature […] are not well-suited to analyzing the structure of the relationships between individual symptoms". Such strong claims require strong evidence, which the authors do not provide. FWMK identify low replicability by analyzing point estimates of networks, contrast this low replicability with the results of two statistical tests that indicate higher replicability, and conclude that these tests are problematic. We make four points. First, statistical tests are superior to the visual inspection of point estimates, because tests take sampling variability into account. Second, FWMK misinterpret the statistical tests in several important ways. Third, FWMK did not follow established recommendations when estimating networks in their first study, thereby underestimating replicability. Fourth, FWMK draw conclusions about methodology that do not follow from investigations of data and instead require investigations of the methodology itself. Overall, we show that the "poor replicability" observed by FWMK occurs due to sampling variability and the use of suboptimal methods. We conclude by discussing important recent simulation work that guides researchers toward models appropriate for their data, such as nonregularized estimation routines.
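To make the core argument concrete, the sketch below simulates the situation the abstract describes: two samples drawn from the same underlying network will produce visibly different point estimates purely through sampling variability, while a test that accounts for that variability correctly finds no difference. This is a minimal illustration, not FWMK's analysis, the Network Comparison Test, or any specific published procedure; the chain-structured precision matrix, sample sizes, and permutation statistic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_correlations(X):
    """Estimate the partial correlation matrix from a data matrix X (n x p)
    by inverting the sample covariance matrix (nonregularized estimation)."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# --- One "true" network generates both samples (illustrative values) --------
p, n = 10, 300
precision = np.eye(p)
for i in range(p - 1):                        # sparse chain of true edges
    precision[i, i + 1] = precision[i + 1, i] = -0.3
sigma = np.linalg.inv(precision)              # implied covariance matrix

X1 = rng.multivariate_normal(np.zeros(p), sigma, size=n)
X2 = rng.multivariate_normal(np.zeros(p), sigma, size=n)

# --- 1. Comparing point estimates alone --------------------------------------
iu = np.triu_indices(p, k=1)
edges1 = partial_correlations(X1)[iu]
edges2 = partial_correlations(X2)[iu]
print("max |edge difference| :", np.abs(edges1 - edges2).max())
print("edge-weight correlation:", np.corrcoef(edges1, edges2)[0, 1])
# The two estimated networks differ noticeably even though the true network
# is identical in both samples: this is sampling variability, not failure
# to replicate.

# --- 2. A test that takes sampling variability into account ------------------
def max_edge_diff(A, B):
    return np.abs(partial_correlations(A)[iu] - partial_correlations(B)[iu]).max()

observed = max_edge_diff(X1, X2)
pooled = np.vstack([X1, X2])
perm_stats = []
for _ in range(500):                          # permute group labels
    idx = rng.permutation(2 * n)
    perm_stats.append(max_edge_diff(pooled[idx[:n]], pooled[idx[n:]]))
p_value = np.mean(np.array(perm_stats) >= observed)
print("permutation p-value    :", p_value)
# The permutation test finds no evidence that the two networks differ,
# whereas eyeballing the point estimates alone would suggest otherwise.
```

Under these assumptions the sketch reproduces the abstract's point in miniature: visual comparison of point estimates conflates sampling noise with genuine non-replication, which is why formal tests are the more appropriate tool.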

Source: http://dx.doi.org/10.1080/00273171.2020.1746903
