Introduction: Prediction models are increasingly being used to guide clinical decision making in primary care. There is a lack of evidence exploring the views of patients and general practitioners (GPs) in primary care around their use and implementation. We aimed to better understand the perspectives of GPs and people with lived experience of depression around the use of prediction models and communication of risk in primary care.
Background: Relapse of depression is common and contributes to the overall associated morbidity and burden. We lack evidence-based tools to estimate an individual's risk of relapse after treatment in primary care; such tools may help us target relapse prevention more effectively.
Objective: The objective was to develop and validate a prognostic model to predict risk of relapse of depression in primary care.
Background: Fetal growth restriction is associated with perinatal morbidity and mortality. Early identification of women having at-risk fetuses can reduce perinatal adverse outcomes.
Objectives: To assess the predictive performance of existing models predicting fetal growth restriction and birthweight, and if needed, to develop and validate new multivariable models using individual participant data.
Objective: To predict birth weight at various potential gestational ages of delivery based on data routinely available at the first antenatal visit.
Design: Individual participant data meta-analysis.
Data Sources: Individual participant data of four cohorts (237 228 pregnancies) from the International Prediction of Pregnancy Complications (IPPIC) network dataset.
Purpose: To develop and validate prediction models for the risk of future work absence and level of presenteeism, in adults seeking primary healthcare with musculoskeletal disorders (MSD).
Methods: Six studies from the West Midlands and North West regions of England, recruiting adults consulting primary care with MSD, were included for model development and internal-external cross-validation (IECV). The primary outcome was any work absence within 6 months of their consultation.
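To make the IECV design concrete, here is a minimal sketch assuming a pooled dataset with one row per participant and a study identifier column: each study is held out in turn, the model is developed on the remaining studies, and performance is assessed in the held-out study. The column names (study_id, absence_6m), the logistic model, and the use of scikit-learn are illustrative assumptions, not taken from the study itself.

```python
# Illustrative internal-external cross-validation (IECV) loop.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def iecv_auc(df: pd.DataFrame, predictors: list[str], outcome: str = "absence_6m"):
    results = {}
    for study in df["study_id"].unique():
        dev = df[df["study_id"] != study]   # development data: all other studies
        val = df[df["study_id"] == study]   # validation data: the held-out study
        model = LogisticRegression(max_iter=1000).fit(dev[predictors], dev[outcome])
        pred = model.predict_proba(val[predictors])[:, 1]
        results[study] = roc_auc_score(val[outcome], pred)  # discrimination in held-out study
    return results
```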
Background: Falls are common in older adults and can devastate personal independence through injury such as fracture and fear of future falls. Methods to identify people for falls prevention interventions are currently limited, with high risks of bias in published prediction models. We have developed and externally validated the eFalls prediction model using routinely collected primary care electronic health records (EHR) to predict risk of emergency department attendance/hospitalisation with fall or fracture within 1 year.
External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high quality dataset to evaluating a model's predictive performance and clinical usefulness.
Evaluating the performance of a clinical prediction model is crucial to establish its predictive accuracy in the populations and settings intended for use. In this article, the first in a three part series, Collins and colleagues describe the importance of a meaningful evaluation using internal, internal-external, and external validation, as well as exploring heterogeneity, fairness, and generalisability in model performance.
Background: Each year, thousands of clinical prediction models are developed to make predictions (e.g. estimated risk) to inform individual diagnosis and prognosis in healthcare.
We have previously proposed temporal recalibration to account for trends in survival over time to improve the calibration of predictions from prognostic models for new patients. This involves first estimating the predictor effects using data from all individuals (full dataset) and then re-estimating the baseline using a subset of the most recent data whilst constraining the predictor effects to remain the same. In this article, we demonstrate how temporal recalibration can be applied in competing risk settings by recalibrating each cause-specific (or subdistribution) hazard model separately.
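As a rough illustration of the temporal recalibration workflow described above (a sketch under assumed data structure, not the authors' implementation): predictor effects come from a Cox model fitted to the full dataset, and the baseline cumulative hazard is then re-estimated from the most recent calendar period with those effects held fixed, here via a simple Breslow-type estimator. In a competing risks setting the same steps would be applied to each cause-specific hazard model in turn. The lifelines library and the column names (time, event, year) are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def temporally_recalibrate(df, predictors, recent_from_year):
    # Step 1: estimate predictor effects on the full dataset.
    cph = CoxPHFitter().fit(df[predictors + ["time", "event"]], "time", "event")
    beta = cph.params_[predictors]
    lp = df[predictors].to_numpy() @ beta.to_numpy()      # fixed linear predictor
    # Step 2: re-estimate the baseline cumulative hazard on recent data only,
    # holding the predictor effects constant (Breslow-type estimator).
    recent = df.assign(lp=lp)
    recent = recent[recent["year"] >= recent_from_year]
    event_times = np.sort(recent.loc[recent["event"] == 1, "time"].unique())
    increments = []
    for t in event_times:
        d = ((recent["time"] == t) & (recent["event"] == 1)).sum()   # events at time t
        risk = np.exp(recent.loc[recent["time"] >= t, "lp"]).sum()   # risk set contribution
        increments.append(d / risk)
    baseline_cumhaz = pd.Series(np.cumsum(increments), index=event_times)
    return beta, baseline_cumhaz
```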
Background: Antihypertensives reduce the risk of cardiovascular disease but are also associated with harms including acute kidney injury (AKI). Few data exist to guide clinical decision making regarding these risks.
Aim: To develop a prediction model estimating the risk of AKI in people potentially indicated for antihypertensive treatment.
Objectives: To assess improvement in the completeness of reporting coronavirus (COVID-19) prediction models after the peer review process.
Study Design And Setting: Studies included in a living systematic review of COVID-19 prediction models, with both preprint and peer-reviewed published versions available, were assessed. The primary outcome was the change in percentage adherence to the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) reporting guidelines between pre-print and published manuscripts.
Objective: To develop and externally validate the STRAtifying Treatments In the multi-morbid Frail elderlY (STRATIFY)-Falls clinical prediction model to identify the risk of hospital admission or death from a fall in patients with an indication for antihypertensive treatment.
Design: Retrospective cohort study.
Setting: Primary care data from electronic health records contained within the UK Clinical Practice Research Datalink (CPRD).
Previous articles in Statistics in Medicine describe how to calculate the sample size required for external validation of prediction models with continuous and binary outcomes. The minimum sample size criteria aim to ensure precise estimation of key measures of a model's predictive performance, including measures of calibration, discrimination, and net benefit. Here, we extend the sample size guidance to prediction models with a time-to-event (survival) outcome, to cover external validation in datasets containing censoring.
Background: Identification of biomarkers that predict severe Crohn's disease is an urgent unmet research need, but existing research is piecemeal and haphazard.
Objective: To identify biomarkers that are potentially able to predict the development of subsequent severe Crohn's disease.
Design: This was a prognostic systematic review with meta-analysis reserved for those potential predictors with sufficient existing research (defined as five or more primary studies).
In prediction model research, external validation is needed to examine an existing model's performance using data independent to that for model development. Current external validation studies often suffer from small sample sizes and consequently imprecise predictive performance estimates. To address this, we propose how to determine the minimum sample size needed for a new external validation study of a prediction model for a binary outcome.
Introduction: Sample size "rules-of-thumb" for external validation of clinical prediction models suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise and not specific to the model or validation setting. We investigate factors affecting precision of model performance estimates upon external validation, and propose a more tailored sample size approach.
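As a hedged illustration of why tailored calculations matter (an approximation, not the exact criteria from these papers): one common criterion targets precise estimation of the observed/expected (O/E) ratio, using the delta-method approximation SE(ln O/E) ≈ sqrt((1 − φ)/(nφ)), where φ is the anticipated outcome proportion in the validation setting. The function name and target width below are illustrative.

```python
# Approximate minimum n so that the 95% CI width for ln(O/E) is at most `ci_width`.
import math

def n_for_oe_precision(phi: float, ci_width: float = 0.2) -> int:
    se_target = ci_width / (2 * 1.96)                 # target standard error of ln(O/E)
    return math.ceil((1 - phi) / (phi * se_target ** 2))

# Example: with an outcome proportion of 0.1 and a target CI width of 0.2 for ln(O/E),
# roughly 3,500 participants (~350 events) are needed, far more than the
# traditional "100 events and 100 non-events" rule of thumb suggests.
print(n_for_oe_precision(phi=0.1, ci_width=0.2))
```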
Objective: To examine the association between antihypertensive treatment and specific adverse events.
Design: Systematic review and meta-analysis.
Eligibility Criteria: Randomised controlled trials of adults receiving antihypertensives compared with placebo or no treatment, more antihypertensive drugs compared with fewer antihypertensive drugs, or higher blood pressure targets compared with lower targets.
Objectives: When developing a clinical prediction model, penalization techniques are recommended to address overfitting, as they shrink predictor effect estimates toward the null and reduce mean-square prediction error in new individuals. However, shrinkage and penalty terms ('tuning parameters') are estimated with uncertainty from the development data set. We examined the magnitude of this uncertainty and the subsequent impact on prediction model performance.
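The instability of data-driven tuning parameters can be made visible with a small simulation; the following is an assumed illustration using scikit-learn's cross-validated ridge logistic regression, not the study's actual code. With small development datasets, the selected penalty can vary widely across resamples, which is the kind of tuning-parameter uncertainty the study examines.

```python
# Re-estimate the penalty ("tuning") parameter on bootstrap resamples of a
# small development dataset to show how unstable its selection can be.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

chosen_C = []
for _ in range(50):
    idx = rng.integers(0, len(y), len(y))            # bootstrap resample
    model = LogisticRegressionCV(Cs=20, cv=5, penalty="l2", max_iter=2000)
    model.fit(X[idx], y[idx])
    chosen_C.append(model.C_[0])                     # selected inverse penalty strength

print("Selected C across bootstraps: "
      f"min={min(chosen_C):.3g}, median={np.median(chosen_C):.3g}, max={max(chosen_C):.3g}")
```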
Clinical prediction models provide individualized outcome predictions to inform patient counseling and clinical decision making. External validation is the process of examining a prediction model's performance in data independent to that used for model development. Current external validation studies often suffer from small sample sizes, and subsequently imprecise estimates of a model's predictive performance.