Network models are gaining popularity as a way to estimate direct effects among psychological variables and to investigate the structure of constructs. A key feature of network estimation is determining which edges are likely to be non-zero. In psychology, this is commonly achieved through the graphical lasso, a regularization method that estimates a precision matrix of Gaussian variables using an ℓ1-penalty to push small values to zero. A tuning parameter, λ, controls the sparsity of the network. There are many methods for selecting λ, and they can lead to vastly different graphs. The most common approach in psychological network applications is to minimize the extended Bayesian information criterion (EBIC), but the consistency of this method for model selection has primarily been examined in high-dimensional settings (i.e., n < p) that are uncommon in psychology. Further, there is some evidence that alternative selection methods may have superior performance. Here, using simulation, we compare four methods for selecting λ: the stability approach to regularization selection (StARS), K-fold cross-validation, the rotation information criterion (RIC), and EBIC. Our results demonstrate that penalty parameter selection should be based on the characteristics of the data and the inferential goal (e.g., increasing sensitivity versus avoiding false positives). We end with recommendations for selecting the penalty parameter when using the graphical lasso.
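For readers wanting to try one of the compared strategies, below is a minimal sketch of K-fold cross-validation over the graphical-lasso penalty using scikit-learn's GraphicalLassoCV. The simulated placeholder data, the 5-fold choice, and the edge-detection threshold are illustrative assumptions, not the paper's simulation design.

```python
# Sketch: select the graphical-lasso penalty (lambda; "alpha" in scikit-learn)
# by K-fold cross-validation, then read off the non-zero edges.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n, p = 500, 10                    # low-dimensional case typical of psychology (n > p)
X = rng.standard_normal((n, p))   # placeholder data; substitute item-level scores

model = GraphicalLassoCV(cv=5).fit(X)   # 5-fold CV over a grid of penalty values
precision = model.precision_            # estimated precision matrix

# Off-diagonal non-zero entries of the precision matrix are the network's edges.
edges = (np.abs(precision) > 1e-8) & ~np.eye(p, dtype=bool)
print(f"selected penalty: {model.alpha_:.4f}")
print(f"edges retained: {edges.sum() // 2}")
```

EBIC-based selection, the default in many psychological network applications, is typically run in R instead (e.g., qgraph's EBICglasso), where a hyperparameter gamma trades off sensitivity against false positives.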
DOI: http://dx.doi.org/10.1080/00273171.2019.1672516