An Explainable Artificial Intelligence Predictor for Early Detection of Sepsis.

Crit Care Med

The State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing, China.

Published: November 2020

Objectives: Early detection of sepsis is critical in clinical practice, since each hour of delayed treatment has been associated with increased mortality due to irreversible organ damage. This study aimed to develop an explainable artificial intelligence model for the early prediction of sepsis by analyzing electronic health record data from ICUs provided by the PhysioNet/Computing in Cardiology Challenge 2019.

Design: Retrospective observational study.

Setting: We developed our model on the publicly shared ICU data and verified it on the full hidden populations used for challenge scoring.

Patients: The public database included 40,336 patients' electronic health records sourced from Beth Israel Deaconess Medical Center (hospital system A) and Emory University Hospital (hospital system B). A total of 24,819 patients from hospital systems A, B, and C (an unidentified hospital system) were sequestered as the full hidden test sets.

Interventions: None.

Measurements And Main Results: A total of 168 features were extracted on an hourly basis. An explainable artificial intelligence sepsis predictor model was trained to predict sepsis in real time. The impact of each feature on hourly sepsis prediction was explored in depth to demonstrate interpretability. The algorithm achieved a final clinical utility score of 0.364 in the challenge when tested on the full hidden test sets; the scores on the three separate test sets were 0.430, 0.422, and -0.048, respectively.
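The paper's actual model, features, and weights are not reproduced here. As an illustration only, the kind of per-feature interpretability described above can be sketched with a toy additive risk scorer, where each hourly risk score decomposes into one contribution per feature; all feature names, weights, and reference values below are hypothetical placeholders, not values from the study.

```python
# Toy additive scorer: each feature's contribution is its weight times its
# deviation from a reference (population mean) value, so the hourly risk
# score decomposes exactly into per-feature contributions.

# Hypothetical weights and reference values (illustrative, not from the paper).
WEIGHTS = {"heart_rate": 0.02, "temperature": 0.5, "wbc_count": 0.04, "lactate": 0.3}
MEANS = {"heart_rate": 80.0, "temperature": 37.0, "wbc_count": 8.0, "lactate": 1.0}

def explain_hourly_risk(observation):
    """Return (risk_score, per-feature contributions) for one hourly record."""
    contributions = {
        name: WEIGHTS[name] * (observation[name] - MEANS[name])
        for name in WEIGHTS
    }
    return sum(contributions.values()), contributions

# One hypothetical hourly observation for a patient.
hour_7 = {"heart_rate": 112.0, "temperature": 38.6, "wbc_count": 15.2, "lactate": 3.1}
score, contribs = explain_hourly_risk(hour_7)

# Rank features by absolute contribution, as an interpretability report would.
ranking = sorted(contribs, key=lambda name: abs(contribs[name]), reverse=True)
```

Real explainable predictors typically derive such per-feature attributions from a trained model (e.g. with SHAP values) rather than fixed linear weights, but the additive decomposition shown is the same shape of explanation.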

Conclusions: The explainable artificial intelligence sepsis predictor model achieves superior performance for real-time sepsis risk prediction and provides interpretable information for understanding sepsis risk in the ICU.

DOI: http://dx.doi.org/10.1097/CCM.0000000000004550


Similar Publications

Rationalizing Predictions of Isoform-Selective Phosphoinositide 3-Kinase Inhibitors Using MolAnchor Analysis.

J Chem Inf Model

January 2025

Department of Life Science Informatics and Data Science, B-IT, LIMES Program Unit Chemical Biology and Medicinal Chemistry, Rheinische Friedrich-Wilhelms-Universität, Friedrich-Hirzebruch-Allee 5/6, Bonn D-53115, Germany.

Explaining the predictions of machine learning models is of critical importance for integrating predictive modeling in drug discovery projects. We have generated a test system for predicting isoform selectivity of phosphoinositide 3-kinase (PI3K) inhibitors and systematically analyzed correct predictions of selective inhibitors using a new methodology termed MolAnchor, which is based on the "anchors" concept from explainable artificial intelligence. The approach is designed to generate chemically intuitive explanations of compound predictions.


Protocol to infer off-target effects of drugs on cellular signaling using interactome-based deep learning.

STAR Protoc

January 2025

Department of Cell and Molecular Biology, SciLifeLab, Karolinska Institutet, 171 77 Stockholm, Sweden.

Drugs that target specific proteins often have off-target effects. We present a protocol using artificial neural networks to model cellular transcriptional responses to drugs, aiming to understand their mechanisms of action. We detail steps for predicting transcriptional activities, inferring drug-target interactions, and explaining the off-target mechanism of action.


Jewel beetles pose significant threats to forestry, and effective traps are needed to monitor and manage them. Green traps often catch more beetles, but purple traps catch a greater proportion of females. Understanding the function and mechanism of this behavior can provide a rationale for trap optimization.


We aimed to develop and evaluate Explainable Artificial Intelligence (XAI) for fetal ultrasound using actionable concepts as feedback to end-users, using a prospective cross-center, multi-level approach. We developed, implemented, and tested a deep-learning model for fetal growth scans using both retrospective and prospective data. We used a modified Progressive Concept Bottleneck Model with pre-established clinical concepts as explanations (feedback on image optimization and presence of anatomical landmarks) as well as segmentations (outlining anatomical landmarks).

