AI Article Synopsis

  • The study explores the growing use of deep learning algorithms in radiology for diagnostic support, emphasizing the need for Explainable AI (XAI) to enhance transparency and trust among healthcare professionals.
  • A user study evaluated two visual XAI techniques (Grad-CAM and LIME) applied to deep learning models that diagnose pneumonia and COVID-19 from chest X-ray and CT images; the underlying models achieved accuracy rates of 90% and 98%, respectively.
  • Despite generally positive perceptions of XAI systems, participants showed limited awareness of their practical benefits, with Grad-CAM being favored for coherency and trust, though concerns about its usability in clinical settings were noted.

Article Abstract

The field of radiology imaging has experienced a remarkable increase in the use of deep learning (DL) algorithms to support diagnostic and treatment decisions. This rise has led to the development of Explainable AI (XAI) systems to improve the transparency and trustworthiness of complex DL methods. However, XAI systems face challenges in gaining acceptance within the healthcare sector, mainly due to technical hurdles in utilizing these systems in practice and the lack of human-centered evaluation and validation. In this study, we focus on visual XAI systems applied to DL-enabled diagnostic systems in chest radiography. In particular, we conduct a user study to evaluate two prominent visual XAI techniques from the human perspective. To this end, we created two clinical scenarios for diagnosing pneumonia and COVID-19 using DL techniques applied to chest X-ray and CT scans. The achieved accuracy rates were 90% for pneumonia and 98% for COVID-19. Subsequently, we employed two well-known XAI methods, Grad-CAM (Gradient-weighted Class Activation Mapping) and LIME (Local Interpretable Model-agnostic Explanations), to generate visual explanations elucidating the AI decision-making process. The visual explanations were then evaluated by medical professionals in a user study in terms of clinical relevance, coherency, and user trust. In general, participants expressed a positive perception of the use of XAI systems in chest radiography. However, there was a noticeable lack of awareness regarding their value and practical aspects. Regarding preferences, Grad-CAM showed superior performance over LIME in terms of coherency and trust, although concerns were raised about its clinical usability. Our findings highlight key user-driven explainability requirements, emphasizing the importance of multi-modal explainability and the necessity of increasing awareness of XAI systems among medical practitioners. Inclusive design was also identified as a crucial need to ensure better alignment of these systems with user needs.
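
The abstract above describes generating Grad-CAM and LIME explanations for chest-radiograph classifiers. The sketch below illustrates, under stated assumptions, how such visual explanations can be produced in PyTorch; the backbone (an ImageNet-pretrained ResNet-18 standing in for the study's pneumonia/COVID-19 models), the target layer, the file path, and all hyperparameters are illustrative assumptions rather than the authors' actual pipeline.

```python
# Minimal sketch: Grad-CAM and LIME explanations for an image classifier.
# All model, layer, path, and parameter choices below are illustrative assumptions.
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms
from lime import lime_image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
img = Image.open("chest_xray_example.png").convert("RGB")  # hypothetical image
x = preprocess(img).unsqueeze(0)                           # (1, 3, 224, 224)

# ---- Grad-CAM: weight each feature map by its pooled gradient, sum, ReLU ----
def grad_cam(model, target_layer, image_tensor, class_idx=None):
    activations, gradients = [], []
    h_fwd = target_layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    h_bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))

    logits = model(image_tensor)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    h_fwd.remove(); h_bwd.remove()

    acts, grads = activations[0], gradients[0]       # each (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image_tensor.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam.squeeze().detach().numpy(), class_idx

heatmap, predicted = grad_cam(model, model.layer4[-1], x)
print(f"Grad-CAM: class {predicted}, heatmap shape {heatmap.shape}")

# ---- LIME: perturb superpixels and fit a local surrogate model ----
def predict_fn(images):
    # LIME passes perturbed copies of the image as (N, H, W, 3) numpy arrays.
    batch = torch.stack([preprocess(Image.fromarray(im.astype("uint8")))
                         for im in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    np.array(img.resize((224, 224))), predict_fn,
    top_labels=1, num_samples=1000)                  # illustrative sample count
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
print(f"LIME: superpixel mask shape {mask.shape}")
```

Grad-CAM reuses the classifier's own gradients, so a single forward and backward pass per image suffices, whereas LIME is model-agnostic and instead fits a local surrogate on many perturbed copies of the image, which makes it noticeably more expensive per explanation.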


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11463756
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0308758

Publication Analysis

Top Keywords

xai systems (16)
xai (8)
xai techniques (8)
radiology imaging (8)
visual xai (8)
chest radiography (8)
user study (8)
systems (6)
evaluating explainable (4)
explainable artificial (4)

Similar Publications

Objective: The application of artificial intelligence (AI)-based clinical decision support systems (CDSS) in the healthcare domain is still limited. End-users' difficulty in understanding how the outputs of opaque, black-box AI models are generated contributes to this. It is still unknown which explanations are best presented to end users and how to design the interfaces in which they are presented (explanation user interface, XUI).


Forecasting student performance with precision in the educational space is paramount for creating tailor-made interventions capable of boosting learning effectiveness. However, most traditional student performance prediction models have difficulty dealing with multi-dimensional academic data, which can cause sub-optimal classification and yield only simple, generalized insights. To address these challenges of existing systems, this research proposes a new Multi-dimensional Student Performance Prediction Model (MSPP) inspired by advanced data preprocessing and feature engineering techniques using deep learning.


Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It.

Diagnostics (Basel)

January 2025

Aerospace Engineering Department and Interdisciplinary Research Center for Smart Mobility and Logistics, and Interdisciplinary Research Center Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia.

Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not yet been able to work its way into diagnostic medicine and standard clinical practice. Although data scientists, researchers, and medical experts have been working toward designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seem far-fetched. Diagnostic radiology is no exception.


Zipper Pattern: An Investigation into Psychotic Criminal Detection Using EEG Signals.

Diagnostics (Basel)

January 2025

Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Turkey.

Electroencephalography (EEG) signal-based machine learning models are among the most cost-effective methods for information retrieval. In this context, we aimed to investigate the cortical activities of psychotic criminal subjects by deploying an explainable feature engineering (XFE) model using an EEG psychotic criminal dataset. In this study, a new EEG psychotic criminal dataset was curated, containing EEG signals from psychotic criminal and control groups.


Hepatocellular carcinoma (HCC) remains a global health challenge with high mortality rates, largely due to late diagnosis and suboptimal efficacy of current therapies. With the imperative need for more reliable, non-invasive diagnostic tools and novel therapeutic strategies, this study focuses on the discovery and application of novel genetic biomarkers for HCC using explainable artificial intelligence (XAI). Despite advances in HCC research, current biomarkers like Alpha-fetoprotein (AFP) exhibit limitations in sensitivity and specificity, necessitating a shift towards more precise and reliable markers.

