Psychological networks in clinical populations: investigating the consequences of Berkson's bias.

Psychol Med

Department of Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands.

Published: January 2021

Background: In clinical research, populations are often selected on the sum-score of diagnostic criteria such as symptoms. Estimating statistical models where a subset of the data is selected based on a function of the analyzed variables introduces Berkson's bias, which presents a potential threat to the validity of findings in the clinical literature. The aim of the present paper is to investigate the effect of Berkson's bias on the performance of the two most commonly used psychological network models: the Gaussian Graphical Model (GGM) for continuous and ordinal data, and the Ising Model for binary data.
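The mechanism can be illustrated with a minimal simulation (not from the paper; a sketch with made-up numbers): two symptom scores that are independent in the full population become negatively correlated once cases are selected on their sum-score exceeding a cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two symptom scores that are independent in the full population.
x = rng.standard_normal(n)
y = rng.standard_normal(n)

full_r = np.corrcoef(x, y)[0, 1]

# Clinical-style selection: keep only cases whose sum-score exceeds a cutoff.
selected = (x + y) > 1.5
sel_r = np.corrcoef(x[selected], y[selected])[0, 1]

print(f"full sample r     = {full_r:.3f}")  # close to 0
print(f"selected sample r = {sel_r:.3f}")   # clearly negative
```

Intuitively, among selected cases a low score on one symptom must be compensated by a high score on the other to clear the cutoff, which induces the negative association.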

Methods: In two simulation studies, we test how well the two models recover a true network structure when estimation is based on a subset of the data typically seen in clinical studies. The network is based on a dataset of 2807 patients diagnosed with major depression, and nodes in the network are items from the Hamilton Rating Scale for Depression (HRSD). The simulation studies test different scenarios by varying (1) sample size and (2) the cut-off value of the sum-score which governs the selection of participants.
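The GGM part of this design can be sketched in a few lines (this is not the authors' pipeline, which uses the 2807-patient HRSD network; the 3-node precision matrix, sample size, and cutoff below are illustrative assumptions): sample from a known Gaussian graphical model, apply a sum-score cutoff, and re-estimate the partial-correlation network on the truncated sample.

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the inverse sample covariance."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(1)

# A true 3-node GGM: positive edges 1-2 and 2-3, no edge 1-3.
prec_true = np.array([[ 1.0, -0.3,  0.0],
                      [-0.3,  1.0, -0.3],
                      [ 0.0, -0.3,  1.0]])
cov_true = np.linalg.inv(prec_true)
sample = rng.multivariate_normal(np.zeros(3), cov_true, size=50_000)

# Selection on the sum-score, as in clinical samples (top quartile here).
cutoff = np.quantile(sample.sum(axis=1), 0.75)
clinical = sample[sample.sum(axis=1) > cutoff]

pcor_full = partial_correlations(sample)
pcor_clin = partial_correlations(clinical)
print(np.round(pcor_full, 2))  # recovers the true edges
print(np.round(pcor_clin, 2))  # edges shrink; the absent 1-3 edge turns negative
```

Raising the cutoff (or shrinking the sample) makes the distortion worse, which is the pattern the two simulation studies vary systematically.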

Results: The results of both studies indicate that higher cut-off values are associated with worse recovery of the network structure. As expected from the Berkson's bias literature, selection reduced recovery rates by inducing negative connections between the items.

Conclusion: Our findings provide evidence that Berkson's bias is a considerable and underappreciated problem in the clinical network literature. Furthermore, we discuss potential solutions to circumvent Berkson's bias and their pitfalls.

DOI: http://dx.doi.org/10.1017/S0033291719003209


Similar Publications

A generalisation of the method of regression calibration and comparison with Bayesian and frequentist model averaging methods.

Sci Rep

March 2024

Department of Epidemiology and Biostatistics, School of Medicine, University of California, San Francisco, 550 16th Street, 2nd Floor, San Francisco, CA, 94143, USA.

For many cancer sites, low-dose risks are not known and must be extrapolated from those observed in groups exposed at much higher dose levels. Measurement error can substantially alter the dose-response shape and hence the extrapolated risk. Even in studies with direct measurement of low-dose exposures, measurement error could be substantial relative to the size of the dose estimates and thereby distort population risk estimates.



A generalisation of the method of regression calibration and comparison with the Bayesian 2-dimensional Monte Carlo method.

Res Sq

December 2023

Department of Epidemiology and Biostatistics, School of Medicine, University of California, San Francisco, 550 16th Street, 2nd Floor, San Francisco, CA 94143, USA.

For many cancer sites, it is necessary to assess risks from low-dose exposures via extrapolation from groups exposed at moderate and high dose levels. Measurement error can substantially alter the shape of this relationship and hence the derived population risk estimates. Even in studies with direct measurement of low-dose exposures, measurement error could be substantial relative to the size of the dose estimates and thereby distort population risk estimates.


Regression calibration is a popular approach for correcting biases in estimated regression parameters when exposure variables are measured with error. This approach involves building a calibration equation to estimate the value of the unknown true exposure given the error-prone measurement and other covariates. The estimated, or calibrated, exposure is then substituted for the unknown true exposure in the health outcome regression model.
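The substitution step described above can be sketched with simulated data (a minimal illustration, not the paper's method; the exposure distribution, error variance, and validation-sample size are assumptions): fit a calibration equation for the true exposure on a validation subsample, then use the calibrated exposure in the outcome regression.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# True exposure, an error-prone measurement of it, and a linear outcome.
x_true = rng.gamma(2.0, 1.0, n)
x_meas = x_true + rng.normal(0.0, 1.0, n)       # classical measurement error
y = 0.5 * x_true + rng.normal(0.0, 1.0, n)

naive = np.polyfit(x_meas, y, 1)[0]             # attenuated slope

# Regression calibration: estimate E[x_true | x_meas] on a validation
# subsample, then substitute the calibrated exposure in the outcome model.
val = rng.choice(n, 2_000, replace=False)
a, b = np.polyfit(x_meas[val], x_true[val], 1)  # calibration equation
x_cal = a * x_meas + b
corrected = np.polyfit(x_cal, y, 1)[0]

print(f"naive slope     = {naive:.2f}")      # below the true 0.5
print(f"corrected slope = {corrected:.2f}")  # close to 0.5
```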

