Climate change will modify forest pest outbreak characteristics, although there are disagreements regarding the specifics of these changes. A large part of this variability may be attributed to model specifications. As a case study, we developed a consensus model predicting spruce budworm (SBW, Choristoneura fumiferana [Clem.]) outbreak duration using two different predictor data sets and six different correlative methods. The model was used to project outbreak duration and the uncertainty associated with using different data sets and correlative methods (hereafter, model-specification uncertainty) for 2011-2040, 2041-2070 and 2071-2100, under three forcing scenarios (RCP 2.6, RCP 4.5 and RCP 8.5). The consensus model showed very high explanatory power and low bias. The model projected a greater northward shift and a larger decrease in outbreak duration under the RCP 8.5 scenario. However, variation among single-model projections increased with time, making future projections highly uncertain. Notably, the magnitude of the northward expansion, the overall outbreak duration and the pattern of outbreak duration at the southern edge were highly variable according to the predictor data set and correlative method used. We also demonstrated that variation in forcing scenarios contributed only slightly to the uncertainty of model projections compared with the two sources of model-specification uncertainty. Our approach helped to quantify model-specification uncertainty in future forest pest outbreak characteristics. It may contribute to sounder decision-making by acknowledging the limits of the projections and help to identify areas where model-specification uncertainty is high. As such, we further stress that this uncertainty should be strongly considered when making forest management plans, notably by adopting adaptive management strategies so as to reduce future risks.
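The consensus approach described above can be sketched numerically: average the projections from every data-set/method combination, and measure model-specification uncertainty as the spread across those single-model projections. This is a minimal illustration with simulated numbers, not the study's actual models or data; the 2 x 6 layout mirrors the two predictor data sets and six correlative methods.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical projections of outbreak duration (years) for one grid cell:
# 2 predictor data sets x 6 correlative methods = 12 single-model projections.
projections = rng.normal(loc=8.0, scale=1.5, size=(2, 6))

# Consensus projection: unweighted mean across all model specifications.
consensus = projections.mean()

# Model-specification uncertainty: spread across the 12 projections.
spec_sd = projections.std(ddof=1)

# Rough decomposition: spread attributable to the data set vs the method.
dataset_means = projections.mean(axis=1)   # mean projection per data set
method_means = projections.mean(axis=0)    # mean projection per method
between_datasets = dataset_means.var(ddof=1)
between_methods = method_means.var(ddof=1)

print(f"consensus={consensus:.2f} yr, spec_sd={spec_sd:.2f} yr")
```

Comparing `between_datasets` and `between_methods` indicates which specification choice drives more of the disagreement, in the spirit of the study's partitioning of uncertainty sources.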
DOI: http://dx.doi.org/10.1111/gcb.13142
Objectives: Papers reporting value sets typically report only the standard errors (SEs) around each estimated coefficient in value set models. This is important information but does not help those building cost-effectiveness models, who need to know the uncertainty around the values of health states in order to conduct sensitivity analyses. This paper's aim is to demonstrate how SEs around health-related quality of life (HRQoL) values can be calculated, using the example of the UK EQ-5D-3L value set.
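The core calculation here is the SE of a linear combination of coefficients: a health state's value is built from several decrements, so its variance is x' Cov x, which requires the coefficients' covariances, not just their individual SEs. The sketch below uses illustrative numbers, not the actual UK EQ-5D-3L coefficients or covariance matrix.

```python
import numpy as np

# Hypothetical value-set model: state value = 1 - x' beta, where x indicates
# which decrements apply. Coefficients and SEs are illustrative only.
beta = np.array([0.081, 0.069, 0.104, 0.036, 0.123, 0.269])  # decrements
se = np.array([0.012, 0.010, 0.011, 0.009, 0.013, 0.028])

# Illustrative variance-covariance matrix of the coefficient estimates;
# in practice this comes from the fitted regression output.
corr = np.full((6, 6), 0.2) + 0.8 * np.eye(6)
cov = corr * np.outer(se, se)

x = np.array([1, 1, 0, 1, 0, 0], dtype=float)  # decrements in this state
value = 1.0 - x @ beta
# SE of the state's value: sqrt(x' Cov x) -- off-diagonal terms matter.
se_value = float(np.sqrt(x @ cov @ x))
print(f"value={value:.3f}, SE={se_value:.4f}")
```

Ignoring the covariances (summing squared SEs alone) would understate or overstate the state-level SE whenever coefficient estimates are correlated.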
BMC Med Res Methodol
August 2024
Center for Care Delivery and Outcomes Research, Minneapolis VA Health Care System, One Veterans Drive (152), Minneapolis, MN, 55417, USA.
Background: Dimension reduction methods do not always reduce their underlying indicators to a single composite score. Furthermore, such methods are usually based on optimality criteria that require discarding some information. We suggest, under some conditions, using the joint probability density function (joint pdf or JPD) of the p-dimensional random variable (the p indicators) as an index or composite score.
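The idea of scoring each observation by the estimated joint density of its p indicators can be sketched with a simple parametric estimate. This is an assumed illustration (a multivariate-normal fit on simulated data), not the paper's proposed estimator or conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: n = 200 observations of p = 3 indicators.
X = rng.normal(size=(200, 3))

# Fit a multivariate normal as a crude estimate of the joint pdf; the
# estimated density at each observation serves as its composite score.
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

d = X - mu
# Log-density of each row under the fitted multivariate normal.
log_density = -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d)
                      + logdet + X.shape[1] * np.log(2 * np.pi))
scores = np.exp(log_density)  # higher score = a more "typical" profile
```

A nonparametric density estimate (e.g. a kernel density) could replace the normal fit when the indicators' joint distribution is clearly non-Gaussian.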
Pharm Stat
November 2024
Department of Biostatistics, Gilead Sciences, Foster City, California, USA.
Difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. The estimation of difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization or g-computation is a widely used method for covariate adjustment in estimating unconditional difference in proportions, because of its robustness to model misspecification.
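Standardization (g-computation) can be sketched in a minimal form: model the outcome given treatment and covariates, then average the modeled risks over the observed covariate distribution under each treatment assignment. With a single binary covariate a saturated stratified estimate is equivalent to fitting an outcome regression, which keeps the sketch dependency-free; the data here are simulated, not from any trial in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical trial: binary baseline covariate x, randomized treatment a,
# binary outcome y whose probability depends on both.
x = rng.binomial(1, 0.4, n)
a = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 0.2 + 0.3 * a + 0.2 * x)

# Saturated outcome "model": P(y | a, x) estimated within each stratum.
def mean_y(a_val, x_val):
    mask = (a == a_val) & (x == x_val)
    return y[mask].mean()

# Standardize over the observed covariate distribution under a=1 and a=0.
p_x1 = x.mean()
risk1 = mean_y(1, 1) * p_x1 + mean_y(1, 0) * (1 - p_x1)
risk0 = mean_y(0, 1) * p_x1 + mean_y(0, 0) * (1 - p_x1)
rd = risk1 - risk0  # covariate-adjusted difference in proportions
print(f"adjusted risk difference = {rd:.3f}")
```

With more covariates one would replace the stratified means with predictions from a fitted (e.g. logistic) outcome model, which is where the robustness-to-misspecification property discussed in the abstract becomes relevant.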
IEEE Trans Pattern Anal Mach Intell
January 2024
Conventional frequentist learning is known to yield poorly calibrated models that fail to reliably quantify the uncertainty of their decisions. Bayesian learning can improve calibration, but formal guarantees apply only under restrictive assumptions about correct model specification. Conformal prediction (CP) offers a general framework for the design of set predictors with calibration guarantees that hold regardless of the underlying data generation mechanism.
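The calibration guarantee of conformal prediction can be illustrated with split (inductive) CP for regression: fit any model on one half of the data, compute nonconformity scores on the other half, and take a finite-sample-adjusted quantile as the prediction-set radius. This is a generic textbook sketch on simulated data, not the specific set predictors studied in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical regression data: y = 2x + noise.
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(0, 0.3, 500)

# Split conformal prediction: fit on one half, calibrate on the other.
fit, cal = slice(0, 250), slice(250, 500)
# Deliberately simple model: least squares through the origin.
slope = np.sum(x[fit] * y[fit]) / np.sum(x[fit] ** 2)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y[cal] - slope * x[cal])
alpha = 0.1
# Adjusted quantile gives >= 1 - alpha marginal coverage regardless of
# whether the fitted model is correctly specified.
k = int(np.ceil((1 - alpha) * (250 + 1)))
q = np.sort(scores)[k - 1]

# Prediction set for a new input: [slope*x_new - q, slope*x_new + q].
x_new = 0.5
interval = (slope * x_new - q, slope * x_new + q)
```

The guarantee is distribution-free: even if `slope` were badly estimated, the interval would simply widen (larger `q`) while retaining its marginal coverage under exchangeability.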
Eur J Investig Health Psychol Educ
July 2022
IPN - Leibniz Institute for Science and Mathematics Education, Olshausenstraße 62, 24118 Kiel, Germany.
In educational large-scale assessment (LSA) studies such as PISA, item response theory (IRT) scaling models summarize students' performance on cognitive test items across countries. This article investigates the impact of different factors in model specifications for the PISA 2018 mathematics study. Systematically varying such model-specification options is also known as multiverse analysis or specification curve analysis in the social sciences.
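The multiverse / specification-curve idea is to enumerate every combination of defensible analysis choices and record the estimate under each one. The toy sketch below uses invented processing choices on simulated scores; it is not the PISA IRT scaling models, only the enumeration pattern.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: student scores and sampling weights.
scores = rng.normal(500, 100, 1000)
weights = rng.uniform(0.5, 1.5, 1000)

# A toy multiverse: each "specification" is one combination of choices.
options = {
    "trim_outliers": [False, True],
    "use_weights": [False, True],
}
estimates = {}
for combo in itertools.product(*options.values()):
    spec = dict(zip(options, combo))
    s, w = scores, weights
    if spec["trim_outliers"]:
        keep = np.abs(s - s.mean()) < 2 * s.std()
        s, w = s[keep], w[keep]
    est = np.average(s, weights=w if spec["use_weights"] else None)
    estimates[combo] = est

# The specification curve: all estimates, sorted across the multiverse.
curve = sorted(estimates.values())
```

Plotting `curve` against the specifications that produced each estimate shows how much a substantive conclusion depends on arbitrary model-specification choices, which is the question the article asks of the PISA 2018 scaling.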