Background: Intensive care unit (ICU) readmissions are associated with mortality and poor outcomes. To improve discharge decisions, machine learning (ML) could help to identify patients at risk of ICU readmission. However, as many models are black boxes, dangerous properties may remain unnoticed. Widely used explanation methods also have inherent limitations. Few studies evaluate inherently interpretable ML models for health care or involve clinicians in inspecting the trained model.

Methods: An inherently interpretable model for the prediction of 3-day ICU readmission was developed. We used explainable boosting machines, which learn modular risk functions and have already been shown to be suitable for the health care domain. We created a retrospective cohort of 15,589 ICU stays and 169 variables collected between 2006 and 2019 at the University Hospital Münster. A team of physicians inspected the model, checked the plausibility of each risk function, and removed problematic ones. We collected qualitative feedback during this process and analyzed the reasons for removing risk functions. The performance of the final explainable boosting machine was compared with a validated clinical score and three commonly used ML models. External validation was performed on the widely used Medical Information Mart for Intensive Care version IV (MIMIC-IV) database.
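
For readers unfamiliar with explainable boosting machines, the sketch below shows how such a model can be trained and its risk functions inspected with the open-source InterpretML library; the file name, label column, and train/test split are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of training and inspecting an explainable boosting machine (EBM)
# with the open-source InterpretML library. The file name, label column, and
# train/test split below are illustrative assumptions, not the study's pipeline.
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Hypothetical cohort table: one row per ICU stay, binary 3-day readmission label.
df = pd.read_csv("icu_stays.csv")
X = df.drop(columns=["readmitted_3d"])
y = df["readmitted_3d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# The EBM learns one additive risk function per feature (plus selected pairwise
# interactions), which is what makes the model inspectable term by term.
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Global explanation: one shape plot per risk function that clinicians can review
# for plausibility; an implausible term can be handled by dropping its feature
# and refitting (the editing workflow used in the study itself may differ).
show(ebm.explain_global())
```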

Results: The final explainable boosting machine used 67 features and achieved an area under the precision-recall curve (AUPRC) of 0.119 ± 0.020 and an area under the receiver operating characteristic curve (AUROC) of 0.680 ± 0.025. It performed on par with state-of-the-art gradient boosting machines (AUPRC 0.123 ± 0.016, AUROC 0.665 ± 0.036) and outperformed the Simplified Acute Physiology Score II (0.084 ± 0.025, 0.607 ± 0.019), logistic regression (0.092 ± 0.026, 0.587 ± 0.016), and recurrent neural networks (0.095 ± 0.008, 0.594 ± 0.027). External validation confirmed that explainable boosting machines (0.221 ± 0.023, 0.760 ± 0.010) performed similarly to gradient boosting machines (0.232 ± 0.029, 0.772 ± 0.018). Evaluation of the model inspection showed that explainable boosting machines can be useful for detecting and removing problematic risk functions.
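
For context, the two reported metrics can be computed from held-out predictions with scikit-learn as sketched below; ebm, X_test, and y_test continue the illustrative example above, and the mean ± standard deviation values in the abstract come from the study's own evaluation, not from this snippet.

```python
# Computing the two reported metrics, AUPRC and AUROC, for a fitted classifier.
# Variables continue the illustrative sketch above; how the study aggregated the
# reported mean ± standard deviation values is not reproduced here.
from sklearn.metrics import average_precision_score, roc_auc_score

# Predicted probability of the positive class (3-day ICU readmission).
y_score = ebm.predict_proba(X_test)[:, 1]

auprc = average_precision_score(y_test, y_score)  # area under the precision-recall curve
auroc = roc_auc_score(y_test, y_score)            # area under the ROC curve
print(f"AUPRC = {auprc:.3f}, AUROC = {auroc:.3f}")
```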

Conclusions: We developed an inherently interpretable ML model for 3-day ICU readmission prediction that matched the performance of state-of-the-art black box models. Our results suggest that for the low- to medium-dimensional datasets common in health care, it is feasible to develop ML models that allow a high level of human control without sacrificing performance.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9445989
DOI: http://dx.doi.org/10.3389/fmed.2022.960296
