Bayesian learners in gradient boosting for linear mixed models.

Int J Biostat

Chair of Spatial Data Science and Statistical Learning, Georg-August-Universität Göttingen, Göttingen, Germany.

Published: May 2024

In mixed models, it is important to select relevant fixed and random effects without relying on prior choices derived from possibly insufficient theory. Inference with current boosting techniques suffers from biased estimates of random effects and inflexible random effects selection. This paper proposes a new inference method, "BayesBoost", which integrates a Bayesian learner into gradient boosting and simultaneously estimates and selects fixed and random effects in linear mixed models. The method introduces a novel selection strategy for random effects that allows computationally fast selection of random slopes even in high-dimensional data structures. In addition, by drawing on boosting techniques, the new method overcomes a shortcoming of Bayesian inference, namely the lack of precise and unambiguous guidelines for covariate selection; at the same time, it provides Bayesian measures of parameter precision, such as variance component estimates and credible intervals, that are not available in conventional boosting frameworks. The effectiveness of the new approach is demonstrated through simulations and a real-world application.
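The abstract describes a division of labor: componentwise gradient boosting handles the selection and estimation of fixed effects, while a Bayesian learner updates the random effects and supplies posterior-based precision measures. The following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation (which is behind the DOI below): it pairs componentwise least-squares base learners with a conjugate-normal posterior-mean update for random intercepts. All names and hyperparameters here (nu, n_iter, sigma2, tau2, the simulated data) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a toy linear mixed model: y = X @ beta + b[group] + noise ---
n_groups, n_per, p = 10, 20, 5
n = n_groups * n_per
group = np.repeat(np.arange(n_groups), n_per)
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 0.0])   # sparse fixed effects
b_true = rng.normal(scale=0.8, size=n_groups)      # random intercepts
y = X @ beta_true + b_true[group] + rng.normal(scale=0.5, size=n)

# --- assumed hyperparameters (the paper estimates variances; fixed here) ---
nu, n_iter = 0.1, 500        # learning rate and number of boosting steps
sigma2, tau2 = 0.25, 0.64    # noise and random-intercept variances

beta_hat = np.zeros(p)
b_hat = np.zeros(n_groups)

for _ in range(n_iter):
    # negative gradient of squared-error loss = current residuals
    resid = y - X @ beta_hat - b_hat[group]

    # componentwise least-squares base learners: fit each covariate alone,
    # update only the one that reduces the loss most (built-in selection)
    coefs = X.T @ resid / (X ** 2).sum(axis=0)
    losses = [np.sum((resid - X[:, j] * coefs[j]) ** 2) for j in range(p)]
    j_best = int(np.argmin(losses))
    beta_hat[j_best] += nu * coefs[j_best]

    # Bayesian learner for random intercepts: with a N(0, tau2) prior, the
    # conditional posterior mean of b_g given group residuals r is
    # tau2 * sum(r) / (n_g * tau2 + sigma2); step toward it with rate nu
    resid = y - X @ beta_hat - b_hat[group]
    for g in range(n_groups):
        r_g = resid[group == g]
        post_mean = tau2 * r_g.sum() / (len(r_g) * tau2 + sigma2)
        b_hat[g] += nu * post_mean

print("estimated fixed effects:", np.round(beta_hat, 2))
print("corr(b_hat, b_true):", np.round(np.corrcoef(b_hat, b_true)[0, 1], 2))
```

In the published method the variance components are themselves estimated and full posterior draws yield credible intervals; fixing sigma2 and tau2 and using only the posterior mean keeps this sketch short.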

DOI: http://dx.doi.org/10.1515/ijb-2022-0029

Publication Analysis

Top Keywords

random effects (20), mixed models (12), gradient boosting (8), linear mixed (8), fixed random (8), boosting techniques (8), selection (6), random (6), boosting (5), effects (5)
