
Comparative performance of intensive care mortality prediction models based on manually curated versus automatically extracted electronic health record data.

Int J Med Inform

Department of Intensive Care Medicine, Center for Critical Care Computational Intelligence (C4I), Amsterdam Medical Data Science (AMDS), Amsterdam Cardiovascular Science (ACS), Amsterdam Institute for Infection and Immunity (AII), Amsterdam Public Health (APH), Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands.

Published: August 2024

Introduction: Benchmarking intensive care units for audit and feedback is frequently based on comparing actual versus predicted mortality. Traditionally, mortality prediction models rely on a limited number of input variables and substantial manual data entry and curation. Using automatically extracted electronic health record data may be a promising alternative; however, adequate data on the comparative performance of these approaches are currently lacking.

Methods: The AmsterdamUMCdb intensive care database was used to construct a baseline APACHE IV in-hospital mortality model based on data typically available through manual data curation. Subsequently, new in-hospital mortality models were systematically developed and evaluated. The new models differed in the extent of automatic variable extraction, classification method, use of recalibration, and the size of the data-collection window.

Results: A total of 13 models were developed based on data from 5,077 admissions, divided into training (80%) and test (20%) cohorts. Adding variables or extending collection windows only marginally improved discrimination and calibration. An XGBoost model using only automatically extracted variables, and therefore no acute or chronic diagnoses, was the best-performing automated model, with an AUC of 0.89 and a Brier score of 0.10.

Discussion: Performance of intensive care mortality prediction models based on manually curated versus automatically extracted electronic health record data is similar. Importantly, our results suggest that variables typically requiring manual curation, such as diagnosis at admission and comorbidities, may not be necessary for accurate mortality prediction. These proof-of-concept results require replication using multi-centre data.
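The Results report the two standard metrics for this kind of model: AUC for discrimination and the Brier score for overall probabilistic accuracy. As an illustrative sketch only, with synthetic predictions and outcomes rather than study data, both metrics can be computed from first principles as follows:

```python
def auc(y_true, y_prob):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher predicted risk than a randomly chosen negative case."""
    pos = [p for p, y in zip(y_prob, y_true) if y == 1]
    neg = [p for p, y in zip(y_prob, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(y_true, y_prob):
    """Mean squared difference between predicted probability and the
    observed outcome; lower is better, and an uninformative constant
    prediction of 0.5 scores 0.25."""
    return sum((p - y) ** 2 for p, y in zip(y_prob, y_true)) / len(y_true)

# Synthetic in-hospital mortality data (1 = died, 0 = survived).
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_prob = [0.9, 0.2, 0.1, 0.7, 0.3, 0.35, 0.2, 0.4]

print(f"AUC:   {auc(y_true, y_prob):.2f}")
print(f"Brier: {brier(y_true, y_prob):.2f}")
```

In practice these would be computed on the held-out 20% test cohort with library routines (e.g. scikit-learn's `roc_auc_score` and `brier_score_loss`); the hand-rolled versions above only make the definitions explicit.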

Source: http://dx.doi.org/10.1016/j.ijmedinf.2024.105477

Publication Analysis

Top Keywords

intensive care (16)
mortality prediction (16)
automatically extracted (16)
prediction models (12)
extracted electronic (12)
electronic health (12)
health record (12)
record data (12)
data (9)
comparative performance (8)
