AI Article Synopsis

  • The study focuses on improving prediction of hospital-acquired pressure injuries (HAPIs) in intensive care units (ICUs) by developing an artificial intelligence (AI) risk-assessment model that addresses the limitations of traditional tools, which often miss critical ICU-specific data.
  • Researchers used clinical data from more than 28,000 ICU patients to create an ensemble AI model that achieved an area under the receiver operating characteristic curve of 0.80, and developed an explainer dashboard that visualizes the model's findings so clinicians can interpret them more easily.
  • The resulting AI risk-assessment system aims to provide transparent and interpretable insights for healthcare providers, potentially leading to better preventive measures for HAPIs in ICU settings.

Article Abstract

Background: Hospital-acquired pressure injuries (HAPIs) have a major impact on patient outcomes in intensive care units (ICUs). Effective prevention relies on early and accurate risk assessment. Traditional risk-assessment tools, such as the Braden Scale, often fail to capture ICU-specific factors, limiting their predictive accuracy. Although artificial intelligence models offer improved accuracy, their "black box" nature poses a barrier to clinical adoption.

Objective: To develop an artificial intelligence-based HAPI risk-assessment model enhanced with an explainable artificial intelligence dashboard to improve interpretability at both the global and individual patient levels.

Methods: An explainable artificial intelligence approach was used to analyze ICU patient data from the Medical Information Mart for Intensive Care. Predictor variables were restricted to the first 48 hours after ICU admission. Various machine-learning algorithms were evaluated, culminating in an ensemble "super learner" model. The model's performance was quantified using the area under the receiver operating characteristic curve through 5-fold cross-validation. An explainer dashboard was developed (using synthetic data for patient privacy), featuring interactive visualizations for in-depth model interpretation at the global and local levels.
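For illustration only, the sketch below shows the general shape of such a pipeline: several base learners combined into a cross-validated stacking ("super learner") ensemble and scored on the area under the receiver operating characteristic curve with 5-fold cross-validation, using scikit-learn. The abstract does not specify the algorithms, predictors, or software the authors used, so every name and the data here are hypothetical placeholders.

```python
# Minimal, hypothetical sketch of a cross-validated stacking ("super learner")
# ensemble evaluated with 5-fold cross-validation on AUC.
# Data and feature names are placeholders, not the study's variables.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: stand-in for first-48-hour predictor variables; y: stand-in HAPI outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

base_learners = [
    ("logit", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]

# Stacking with internal cross-validation approximates the super learner idea:
# a meta-learner weights the base learners' out-of-fold predictions.
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

# Performance quantified as AUC via 5-fold cross-validation, as in the abstract.
auc_scores = cross_val_score(super_learner, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC: {auc_scores.mean():.2f}")
```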

Results: The final sample comprised 28 395 patients with a 4.9% incidence of HAPIs. The ensemble super learner model performed well (area under curve = 0.80). The explainer dashboard provided global and patient-level interactive visualizations of model predictions, showing each variable's influence on the risk-assessment outcome.
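The abstract does not name the dashboard software. As one possible illustration, the open-source explainerdashboard package can serve global views (feature importances, dependence plots) and local, per-patient contribution plots for a fitted classifier. The model, predictor names, and data below are hypothetical stand-ins, generated synthetically in the same spirit as the paper's use of synthetic data for patient privacy.

```python
# Minimal, hypothetical sketch of an interactive explainer dashboard with
# global and per-patient (local) views, using the explainerdashboard package.
# The model, predictors, and data are placeholders, not the study's own.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

rng = np.random.default_rng(0)
feature_names = [f"predictor_{i}" for i in range(10)]  # stand-in 48-hour predictors
X = pd.DataFrame(rng.normal(size=(500, 10)), columns=feature_names)
y = pd.Series(rng.integers(0, 2, size=500), name="hapi")

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ClassifierExplainer computes SHAP values and importances; the dashboard then
# serves global plots and local contribution plots showing each variable's
# influence on an individual patient's predicted risk.
explainer = ClassifierExplainer(model, X, y)
ExplainerDashboard(explainer, title="HAPI risk explainer (synthetic data)").run()
```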

Conclusion: The model and its dashboard provide clinicians with a transparent, interpretable artificial intelligence-based risk-assessment system for HAPIs that may enable more effective and timely preventive interventions.

Source
http://dx.doi.org/10.4037/ajcc2024856

Publication Analysis

Top Keywords

artificial intelligence (16)
explainable artificial (12)
intensive care (8)
artificial intelligence-based (8)
explainer dashboard (8)
interactive visualizations (8)
model (6)
artificial (5)
intelligence (4)
intelligence early (4)
