Background: The American College of Surgeons NSQIP risk calculator (RC) uses regression to make predictions for fourteen 30-day surgical outcomes. Although this approach provides accurate risk estimates (in terms of discrimination and calibration), they might be improved by machine learning (ML). To investigate this possibility, the accuracy of regression-based risk estimates was compared to that of estimates from an extreme gradient boosting (XGB) ML algorithm.
Study Design: A cohort of 5,020,713 NSQIP patient records was randomly divided into 80% for model construction and 20% for validation. Risk predictions using regression and XGB-ML were made for 13 RC binary 30-day surgical complications and one continuous outcome (length of stay [LOS]). For the binary outcomes, discrimination was evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC), and calibration was evaluated using Hosmer-Lemeshow statistics. Mean squared error and a calibration-curve analog were evaluated for the continuous LOS outcome.
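The calibration metric used here, the Hosmer-Lemeshow statistic, compares observed and expected event counts within risk-sorted bins. A minimal pure-Python sketch of that computation is below; the 10-bin (decile) grouping is the conventional choice, and the function and variable names are illustrative, not taken from the study itself.

```python
# Hedged sketch of a Hosmer-Lemeshow goodness-of-fit statistic:
# sort cases by predicted risk, split into bins, and sum the
# normalized squared differences between observed and expected events.

def hosmer_lemeshow(y_true, y_prob, n_bins=10):
    """Return the Hosmer-Lemeshow chi-square statistic.

    y_true: list of 0/1 observed outcomes
    y_prob: list of predicted probabilities (same order as y_true)
    """
    pairs = sorted(zip(y_prob, y_true))  # order cases by predicted risk
    n = len(pairs)
    stat = 0.0
    for b in range(n_bins):
        chunk = pairs[b * n // n_bins:(b + 1) * n // n_bins]
        if not chunk:
            continue
        observed = sum(y for _, y in chunk)   # observed event count in bin
        expected = sum(p for p, _ in chunk)   # expected event count in bin
        mean_p = expected / len(chunk)
        denom = expected * (1 - mean_p)       # binomial variance term
        if denom > 0:
            stat += (observed - expected) ** 2 / denom
    return stat

# Toy example: a larger statistic indicates worse calibration.
probs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
outcomes = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(hosmer_lemeshow(probs, outcomes, n_bins=5))
```

In the usual test, the statistic is referred to a chi-square distribution to judge whether miscalibration is statistically significant, which is how the per-outcome comparisons in the Results are framed.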
Results: For every binary outcome, discrimination (AUROC and AUPRC) was slightly greater for XGB-ML than for regression (mean [across the outcomes] AUROC was 0.8299 vs 0.8251, and mean AUPRC was 0.1558 vs 0.1476, for XGB-ML and regression, respectively). For each outcome, miscalibration was greater (larger Hosmer-Lemeshow values) with regression; miscalibration was statistically significant for all regression-based estimates, but for only 4 of 13 outcomes when XGB-ML was used. For LOS, mean squared error was lower for XGB-ML.
Conclusions: XGB-ML provided more accurate risk estimates than regression in terms of discrimination and calibration. Differences in calibration between regression and XGB-ML were of substantial magnitude and support transitioning the RC to XGB-ML.
DOI: http://dx.doi.org/10.1097/XCS.0000000000000556