The field of radiology imaging has experienced a remarkable increase in the use of deep learning (DL) algorithms to support diagnostic and treatment decisions. This rise has led to the development of Explainable AI (XAI) systems aimed at improving the transparency and trustworthiness of complex DL methods. However, XAI systems face challenges in gaining acceptance within the healthcare sector, mainly due to technical hurdles in utilizing them in practice and the lack of human-centered evaluation and validation. In this study, we focus on visual XAI systems applied to DL-enabled diagnostic systems in chest radiography. In particular, we conduct a user study to evaluate two prominent visual XAI techniques from the human perspective. To this end, we created two clinical scenarios for diagnosing pneumonia and COVID-19 using DL techniques applied to chest X-ray and CT scans. The achieved accuracy rates were 90% for pneumonia and 98% for COVID-19. Subsequently, we employed two well-known XAI methods, Grad-CAM (Gradient-weighted Class Activation Mapping) and LIME (Local Interpretable Model-agnostic Explanations), to generate visual explanations elucidating the AI decision-making process. The visual explanations were then evaluated by medical professionals in a user study in terms of clinical relevance, coherency, and user trust. In general, participants expressed a positive perception of the use of XAI systems in chest radiography, but there was a noticeable lack of awareness regarding their value and practical aspects. Regarding preferences, Grad-CAM outperformed LIME in terms of coherency and trust, although concerns were raised about its clinical usability. Our findings highlight key user-driven explainability requirements, emphasizing the importance of multi-modal explainability and the need to raise awareness of XAI systems among medical practitioners. Inclusive design was also identified as crucial to better align these systems with user needs.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11463756 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0308758 | PLOS |
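The abstract above describes producing Grad-CAM heatmaps for a chest radiograph classifier. As a rough illustration only (not the authors' code), the sketch below shows how such a heatmap can be computed for a hypothetical PyTorch chest X-ray classifier; the model architecture, target layer, and file path are placeholder assumptions.

```python
# Minimal Grad-CAM sketch for a hypothetical chest X-ray classifier.
# Model, target layer, and image path are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Hypothetical fine-tuned binary classifier (e.g., pneumonia vs. normal).
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps and attach a hook to capture their gradients.
    activations["value"] = out.detach()
    out.register_hook(lambda grad: gradients.update(value=grad.detach()))

# The last convolutional block is the usual Grad-CAM target layer.
model.layer4.register_forward_hook(fwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
img = Image.open("chest_xray.png").convert("RGB")  # placeholder path
x = preprocess(img).unsqueeze(0)

logits = model(x)
class_idx = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, class_idx].backward()

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # [1, C, 1, 1]
cam = F.relu((weights * activations["value"]).sum(dim=1))     # [1, h, w]
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False).squeeze()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
# `cam` can now be overlaid on the radiograph as a heatmap.
```

A LIME explanation, by contrast, perturbs superpixels of the input image and fits a local surrogate model, for example via the `lime` package's `LimeImageExplainer`.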
Digit Health
January 2025
Institute for Medical Informatics and Biometry, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany.
Objective: The application of artificial intelligence (AI)-based clinical decision support systems (CDSS) in the healthcare domain is still limited. End-users' difficulty in understanding how the outputs of opaque black-box AI models are generated contributes to this. It is still unknown which explanations are best presented to end users and how to design the interfaces in which they are presented (explanation user interface, XUI).
MethodsX
June 2025
Department of Networking & Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, India.
Forecasting student performance with precision is paramount in the educational space for creating tailor-made interventions capable of boosting learning effectiveness. Most traditional student performance prediction models have difficulty dealing with multi-dimensional academic data, which can cause sub-optimal classification and yield only simple, generalized insights. To address these challenges of existing systems, in this research we propose a new Multi-dimensional Student Performance Prediction Model (MSPP) inspired by advanced data preprocessing and feature engineering techniques using deep learning.
Diagnostics (Basel)
January 2025
Aerospace Engineering Department and Interdisciplinary Research Center for Smart Mobility and Logistics, and Interdisciplinary Research Center Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia.
Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not yet been able to work its way into diagnostic medicine and standard clinical practice. Although data scientists, researchers, and medical experts have been working toward designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seem far-fetched. Diagnostic radiology is no exception.
Diagnostics (Basel)
January 2025
Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Turkey.
Electroencephalography (EEG) signal-based machine learning models are among the most cost-effective methods for information retrieval. In this context, we aimed to investigate the cortical activities of psychotic criminal subjects by deploying an explainable feature engineering (XFE) model using an EEG psychotic criminal dataset. In this study, a new EEG psychotic criminal dataset was curated, containing EEG signals from psychotic criminal and control groups.
Expert Syst Appl
October 2024
Department of Cell Systems and Anatomy, University of Texas Health Science Center at San Antonio, TX, United States.
Hepatocellular carcinoma (HCC) remains a global health challenge with high mortality rates, largely due to late diagnosis and suboptimal efficacy of current therapies. With the imperative need for more reliable, non-invasive diagnostic tools and novel therapeutic strategies, this study focuses on the discovery and application of novel genetic biomarkers for HCC using explainable artificial intelligence (XAI). Despite advances in HCC research, current biomarkers like Alpha-fetoprotein (AFP) exhibit limitations in sensitivity and specificity, necessitating a shift towards more precise and reliable markers.