Using leave-one-out cross validation (LOO) in a multilevel regression and poststratification (MRP) workflow: A cautionary tale

AI Article Synopsis

  • Multilevel regression and poststratification (MRP) is increasingly used for population inference, but its accuracy can be influenced by model details, and validation practices are under-researched.
  • The study investigates leave-one-out cross-validation (LOO) methods, focusing on Pareto smoothed importance sampling (PSIS-LOO) and a survey-weighted alternative (WTD-PSIS-LOO), using simulations to assess their effectiveness in ranking model performance (the mechanics of such a PSIS-LOO comparison are sketched after this list).
  • Findings reveal that while both PSIS-LOO and WTD-PSIS-LOO can identify the best and worst models, they struggle to rank models accurately for MRP, particularly in small-area estimation, indicating a need for caution in relying solely on these criteria to validate MRP as a method.
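
As a concrete illustration of the ranking mechanics mentioned above, the sketch below uses ArviZ, a Python library that implements PSIS-LOO. It does not reproduce the paper's MRP models or its survey-weighted WTD-PSIS-LOO variant; ArviZ's bundled eight-schools example posteriors simply stand in for two fitted Bayesian models so that the elpd_loo computation and model ranking can run end to end.

```python
# Minimal sketch of PSIS-LOO model comparison with ArviZ (assumes
# `pip install arviz`). The bundled eight-schools posteriors stand in for
# fitted Bayesian models; they are NOT the paper's MRP models, and the
# survey-weighted WTD-PSIS-LOO variant is not reproduced here.
import arviz as az

# Two pre-fitted example posteriors shipped with ArviZ; both store the
# pointwise log-likelihood draws that PSIS-LOO needs.
centered = az.load_arviz_data("centered_eight")
non_centered = az.load_arviz_data("non_centered_eight")

# PSIS-LOO for a single model: elpd_loo, its standard error, and the
# Pareto k diagnostics that flag unreliable importance ratios.
print(az.loo(centered, pointwise=True))

# Rank candidate models by elpd_loo. In the paper's setting, the entries
# would be alternative multilevel regressions feeding the same
# poststratification step.
ranking = az.compare({"centered": centered, "non_centered": non_centered}, ic="loo")
print(ranking)
```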

Article Abstract

In recent decades, multilevel regression and poststratification (MRP) has surged in popularity for population inference. However, the validity of the estimates can depend on details of the model, and there is currently little research on validation. We explore how leave-one-out cross validation (LOO) can be used to compare Bayesian models for MRP. We investigate two approximate calculations of LOO: Pareto smoothed importance sampling (PSIS-LOO) and a survey-weighted alternative (WTD-PSIS-LOO). Using two simulation designs, we examine how accurately these two criteria recover the correct ordering of model goodness at predicting population and small-area estimands. Focusing first on variable selection, we find that neither PSIS-LOO nor WTD-PSIS-LOO correctly recovers the models' order for an MRP population estimand, although both criteria correctly identify the best and worst model. When considering small-area estimation, the best model differs for different small areas, highlighting the complexity of MRP validation. When considering different priors, the models' order seems slightly better at smaller-area levels. These findings suggest that, while not terrible, PSIS-LOO-based ranking techniques may not be suitable to evaluate MRP as a method. We suggest this is due to the aggregation stage of MRP, where individual-level prediction errors average out. We validate these results by applying the methods to real-world data from the United States National Health and Nutrition Examination Survey (NHANES). Altogether, these results show that PSIS-LOO-based model validation tools need to be used with caution and might not convey the full story when validating MRP as a method.
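
The abstract attributes the weak ranking performance to the aggregation stage of MRP, where individual-level prediction errors average out. The sketch below is a hypothetical NumPy example with made-up cell predictions and census counts (not data or code from the paper): cell-level model predictions are weighted by poststratification cell sizes to form population and small-area estimates, so two models whose pointwise predictions differ can still produce very similar aggregated estimands, which is consistent with the abstract's explanation for why pointwise LOO criteria struggle to separate them.

```python
# Minimal, hypothetical sketch of the poststratification (aggregation)
# stage of MRP; the cell predictions and cell counts below are invented
# for illustration and are not from the paper or NHANES.
import numpy as np

# Poststratification cells: (small area, demographic group) with census counts.
areas = np.array([0, 0, 0, 1, 1, 1])             # small-area index per cell
counts = np.array([120, 300, 80, 500, 150, 60])  # population size of each cell

# Cell-level predictions theta_j from two fitted multilevel models.
theta_model_a = np.array([0.42, 0.55, 0.31, 0.60, 0.48, 0.35])
theta_model_b = np.array([0.40, 0.57, 0.29, 0.62, 0.46, 0.37])

def poststratify(theta, counts):
    """Population estimate: count-weighted average of cell predictions."""
    return np.sum(counts * theta) / np.sum(counts)

def small_area_estimates(theta, counts, areas):
    """Count-weighted average of cell predictions within each small area."""
    return {
        int(a): np.sum(counts[areas == a] * theta[areas == a]) / np.sum(counts[areas == a])
        for a in np.unique(areas)
    }

# Cell-level predictions differ by up to 0.02, but the aggregated estimands
# end up closer: pointwise differences partially cancel once weighted and summed.
print("population A:", poststratify(theta_model_a, counts))
print("population B:", poststratify(theta_model_b, counts))
print("small areas A:", small_area_estimates(theta_model_a, counts, areas))
print("small areas B:", small_area_estimates(theta_model_b, counts, areas))
```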

Source
DOI: http://dx.doi.org/10.1002/sim.9964

Publication Analysis

Top Keywords

Keyword                          Count
leave-one-out cross              8
cross validation                 8
validation loo                   8
multilevel regression            8
regression poststratification    8
mrp                              8
poststratification mrp           8
models' order                    8
mrp method                       8
validation                       5
