AI Article Synopsis

  • Machine Learning (ML) and Deep Learning (DL) models outperform traditional methods in healthcare predictions but suffer from low interpretability, which hampers their practical use.
  • Explainable Artificial Intelligence (XAI) methods have been proposed to improve understanding of these models; however, evaluation of their effectiveness is currently limited.
  • A scoping review of 76 studies drawn from 3220 publications found a rise in XAI usage, with SHAP, partial dependence plots, and LIME the predominant methods, but highlighted significant gaps in method reporting and the need for thorough evaluation before these techniques are applied in healthcare.

Article Abstract

Machine Learning (ML) and Deep Learning (DL) models show potential in surpassing traditional methods, including generalised linear models, for healthcare predictions, particularly with large, complex datasets. However, low interpretability hinders practical implementation. To address this, Explainable Artificial Intelligence (XAI) methods have been proposed, but comprehensive evaluation of their effectiveness is currently limited. The aim of this scoping review is to critically appraise the application of XAI methods in ML/DL models using Electronic Health Record (EHR) data. In accordance with PRISMA scoping review guidelines, the study searched PubMed and Ovid/MEDLINE (including EMBASE) for publications related to tabular EHR data that employed ML/DL models with XAI. Of 3220 identified publications, 76 were included. The selected publications, published between February 2017 and June 2023, demonstrated an exponential increase over time. Extreme Gradient Boosting and Random Forest models were the most frequently used ML/DL methods, with 51 and 50 publications, respectively. Among XAI methods, Shapley Additive Explanations (SHAP) was predominant, appearing in 63 of 76 publications, followed by partial dependence plots (PDPs) in 11 publications and Local Interpretable Model-agnostic Explanations (LIME) in 8 publications. Despite the growing adoption of XAI methods, their applications varied widely and lacked critical evaluation. This review identifies the increasing use of XAI in tabular EHR research and highlights deficiencies in the reporting of methods and a lack of critical appraisal of validity and robustness. The study emphasises the need for further evaluation of XAI methods and underscores the importance of cautious implementation and interpretation in healthcare settings.
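For context on the review's most frequent XAI method: SHAP is built on Shapley values, which distribute the difference between a model's prediction for an instance and a baseline prediction across the input features. The sketch below (not taken from the review; the linear "risk model", feature values, and baseline are hypothetical) computes exact Shapley attributions by enumerating feature subsets, which is feasible only for a handful of features; libraries such as `shap` use model-specific approximations instead.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f(x) relative to f(baseline).

    f: model, a callable on a feature vector
    x: instance to explain; baseline: reference input (e.g. feature means)
    """
    n = len(x)

    def v(subset):
        # Value function: features in `subset` take the instance's values,
        # all other features are held at the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley kernel weight for a subset of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (v(set(s) | {i}) - v(set(s)))
        phis.append(phi)
    return phis

# Toy linear "risk model": here the Shapley value of feature i reduces
# to w_i * (x_i - baseline_i), which lets us sanity-check the result.
weights = [0.5, -1.0, 2.0]
model = lambda z: sum(w * zi for w, zi in zip(weights, z))
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
print(shapley_values(model, x, base))  # approximately [0.5, -2.0, 6.0]
```

By construction the attributions sum to `model(x) - model(base)`, the "additivity" property that gives SHAP its name.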

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528818 (PMC)
http://dx.doi.org/10.1177/20552076241272657 (DOI)

Publication Analysis

Top Keywords

  • xai methods (20)
  • scoping review (12)
  • explainable artificial (8)
  • artificial intelligence (8)
  • xai (8)
  • intelligence xai (8)
  • electronic health (8)
  • health record (8)
  • methods (8)
  • ml/dl models (8)
