Explainable artificial intelligence (XAI) has attracted considerable interest in recent years for its ability to explain the complex decision-making processes of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) frameworks have emerged as popular interpretability tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting models for the detection of Alzheimer's disease (AD). Adhering to PRISMA and Kitchenham's guidelines, we identified 23 relevant articles and investigated these frameworks' prospective capabilities, benefits, and challenges in depth. The results emphasise XAI's crucial role in strengthening the trustworthiness of AI-based AD predictions. This review aims to clarify the fundamental capabilities of the LIME and SHAP XAI frameworks for enhancing fidelity within clinical decision support systems for AD prognosis.
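To illustrate how LIME and SHAP are typically applied to a tabular AD classifier, the following is a minimal sketch; the scikit-learn model, the synthetic data, and the feature names are hypothetical placeholders, not drawn from the reviewed studies.

```python
# Minimal sketch: explaining a tabular AD classifier with SHAP and LIME.
# Assumes scikit-learn, shap, and lime are installed; feature names and
# synthetic data below are hypothetical, not ADNI or any reviewed dataset.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "MMSE", "hippocampal_volume", "APOE4_count"]
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) < 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP: game-theoretic feature attributions for a single prediction.
shap_values = shap.TreeExplainer(model).shap_values(X_test[:1])

# LIME: local surrogate model fitted around the same instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["CN", "AD"], mode="classification"
)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())
```

In clinical decision support, such per-patient attributions are what allow a clinician to check whether a prediction rests on plausible features rather than artefacts.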

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10997568
DOI: http://dx.doi.org/10.1186/s40708-024-00222-1


Similar Publications

Machine Learning-Based Alzheimer's Disease Stage Diagnosis Utilizing Blood Gene Expression and Clinical Data: A Comparative Investigation.

Diagnostics (Basel)

January 2025

Department of Computer Science and Engineering, Faculty of Engineering and Technology, Technology Campus (Peenya Campus), Ramaiah University of Applied Sciences, Bengaluru 560058, India.

This study presents a comparative analysis of the multistage diagnosis of Alzheimer's disease (AD), including mild cognitive impairment (MCI), utilizing two distinct types of biomarkers: blood gene expression and clinical biomarker samples. Both sample types, obtained from participants in the Alzheimer's Disease Neuroimaging Initiative (ADNI), were independently analyzed using machine learning (ML)-based multiclassifiers. This study applied novel machine learning-based data augmentation techniques to gene expression profile data, which are high-dimensional, low-sample-size (HDLSS), and inherently highly imbalanced.
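The excerpt does not name the augmentation technique used; the following is a minimal sketch using SMOTE as a hypothetical stand-in for balancing an HDLSS gene expression matrix.

```python
# Illustrative sketch only: the abstract excerpt does not specify the
# augmentation method, so SMOTE is used here as a hypothetical stand-in
# for balancing a high-dimensional, low-sample-size (HDLSS) matrix.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5000))        # 60 samples, 5000 genes (HDLSS)
y = np.array([0] * 45 + [1] * 15)      # imbalanced class labels

# k_neighbors must stay below the minority-class sample count (15 here).
X_aug, y_aug = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(X_aug.shape, np.bincount(y_aug))  # (90, 5000), 45 samples per class
```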


A novel hybrid ViT-LSTM model with explainable AI for brain stroke detection and classification in CT images: A case study of Rajshahi region.

Comput Biol Med

January 2025

Department of Biomedical Engineering, Islamic University, Kushtia, 7003, Bangladesh; Bio-Imaging Research Laboratory, Islamic University, Kushtia, 7003, Bangladesh.

Computed tomography (CT) scans play a key role in the diagnosis of stroke, a leading cause of morbidity and mortality worldwide. However, interpreting these scans is often challenging, necessitating automated solutions for timely and accurate diagnosis. This research proposed a novel hybrid model that integrates a Vision Transformer (ViT) and a Long Short-Term Memory (LSTM) network to accurately detect and classify stroke characteristics using CT images.
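The excerpt does not detail the hybrid architecture; the following is a minimal PyTorch sketch of one plausible ViT-into-LSTM arrangement, with the torchvision backbone, slice count, and dimensions as assumptions rather than the paper's design.

```python
# Hypothetical sketch of a ViT -> LSTM hybrid classifier; the torchvision
# vit_b_16 backbone and all dimensions are assumptions, not the paper's model.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class ViTLSTMClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, hidden: int = 256):
        super().__init__()
        self.vit = vit_b_16(weights=None)   # per-slice feature extractor
        self.vit.heads = nn.Identity()      # drop the ViT classification head
        self.lstm = nn.LSTM(input_size=768, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        # slices: (batch, num_slices, 3, 224, 224), a stack of CT slices.
        b, s = slices.shape[:2]
        feats = self.vit(slices.flatten(0, 1)).reshape(b, s, -1)  # per-slice embeddings
        _, (h, _) = self.lstm(feats)        # LSTM aggregates the slice sequence
        return self.fc(h[-1])

logits = ViTLSTMClassifier()(torch.randn(2, 4, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```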


Climate change poses significant challenges to global food security by altering precipitation patterns and increasing the frequency of extreme weather events such as droughts, heatwaves, and floods. These phenomena directly affect agricultural productivity, leading to lower crop yields and economic losses for farmers. This study leverages Artificial Intelligence (AI) and Explainable Artificial Intelligence (XAI) techniques to predict crop yields and assess the impacts of climate change on agriculture, providing a novel approach to understanding complex interactions between climatic and agronomic factors.


Purpose: Radiomics-based machine learning (ML) models of amino acid positron emission tomography (PET) images have proven effective in glioma prediction tasks. However, their clinical impact on physician interpretation remains limited. This study investigated whether an explainable radiomics model modifies nuclear physicians' assessment of glioma aggressiveness at diagnosis.

