A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.

Sensors (Basel)

AI Center, Tunghai University, No. 1727, Section 4, Taiwan Blvd, Xitun District, Taichung 407224, Taiwan.

Published: October 2022

The emerging field of eXplainable AI (XAI) is of utmost importance in the medical domain, where explanations are needed for legally and ethically sound AI and for understanding a model's decisions, its results, and the current status of a patient's condition. We present a detailed survey of medical XAI covering model enhancements, evaluation methods, an overview of case studies with open-box architectures, open medical datasets, and future improvements. We contrast AI and XAI methods and group recent XAI approaches into (i) local and global methods for preprocessing, (ii) knowledge-base and distillation algorithms, and (iii) interpretable machine learning. XAI characteristics and the future of healthcare explainability are covered in detail, and a set of prerequisites offers material for brainstorming sessions before beginning a medical XAI project. A practical case study traces recent XAI progress and the advanced developments it has enabled in the medical field. Finally, this survey proposes ideas around a user-in-the-loop approach, emphasizing human-machine collaboration to produce better explainable solutions. The accompanying XAI feedback system for human rating-based evaluation offers a constructive way to collect human-enforced explanation feedback. Because ratings, scores, and grading have long been a limitation of XAI, a novel XAI recommendation system and XAI scoring system are designed in this work. The paper also stresses the importance of implementing explainable solutions in the high-impact medical field.
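The scoring system itself is described in the paper rather than in this abstract; as a minimal sketch of a human rating-based evaluation, assuming clinicians rate each explanation on a 1-5 scale across hypothetical criteria and weights (none of which are taken from the survey):

```python
# Minimal sketch of a human rating-based XAI scoring system.
# Assumption: each rater scores an explanation 1-5 per criterion;
# the criteria names and weights below are hypothetical.

from statistics import mean

CRITERIA_WEIGHTS = {
    "clinical_plausibility": 0.4,
    "completeness": 0.3,
    "clarity": 0.3,
}

def explanation_score(ratings):
    """Aggregate per-criterion 1-5 ratings from several raters
    into a single weighted score in [1, 5]."""
    return sum(
        weight * mean(r[criterion] for r in ratings)
        for criterion, weight in CRITERIA_WEIGHTS.items()
    )

ratings = [  # one dict per human rater
    {"clinical_plausibility": 4, "completeness": 3, "clarity": 5},
    {"clinical_plausibility": 5, "completeness": 4, "clarity": 4},
]
print(f"XAI score: {explanation_score(ratings):.2f}")  # 4.20
```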

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9609212
DOI: http://dx.doi.org/10.3390/s22208068

Publication Analysis

Top Keywords

xai (12); survey medical (8); explainable xai (8); xai progress (8); scoring system (8); medical domain (8); medical xai (8); xai methods (8); medical field (8); explainable solutions (8)

Similar Publications

Hepatocellular carcinoma (HCC) remains a global health challenge with high mortality rates, largely due to late diagnosis and the suboptimal efficacy of current therapies. Given the pressing need for more reliable, non-invasive diagnostic tools and novel therapeutic strategies, this study focuses on the discovery and application of novel genetic biomarkers for HCC using explainable artificial intelligence (XAI). Despite advances in HCC research, current biomarkers such as alpha-fetoprotein (AFP) exhibit limited sensitivity and specificity, necessitating a shift towards more precise and reliable markers.
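The snippet does not spell out the study's pipeline; the sketch below only illustrates the generic XAI biomarker-ranking idea it alludes to, using scikit-learn permutation importance on synthetic expression data (the gene panel and labels are illustrative, not the study's):

```python
# Sketch of XAI-style biomarker ranking: fit a classifier on gene
# expression data, then rank genes by permutation importance.
# Data is synthetic; a real study would use HCC cohort data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
genes = ["AFP", "GPC3", "DKK1", "MDK"]  # illustrative panel
X = rng.normal(size=(200, len(genes)))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # label driven by 2 genes

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"{genes[idx]:>5}: importance {result.importances_mean[idx]:.3f}")
```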

The development of deep learning algorithms has transformed medical image analysis, especially brain tumor recognition. This research introduces a robust automatic micro brain tumor identification method utilizing the VGG16 deep learning model. Detailed features are extracted from microscopy magnetic resonance imaging (MMRI) scans, providing multi-modal insights.
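As a rough illustration of the transfer-learning setup such studies typically use (the input size, head layers, and class count below are assumptions, not the paper's reported configuration):

```python
# Sketch of a VGG16-based tumor classifier via transfer learning.

import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation="softmax"),  # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```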

Artificial intelligence (AI) is a promising approach to identify new antimicrobial compounds in diverse microbial species. Here we developed an AI-based, explainable deep learning model, EvoGradient, that predicts the potency of antimicrobial peptides (AMPs) and virtually modifies peptide sequences to produce more potent AMPs, akin to in silico directed evolution. We applied this model to peptides encoded in low-abundance human oral bacteria, resulting in the virtual evolution of 32 peptides into potent AMPs.
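EvoGradient itself is a learned deep model; the toy loop below only illustrates the general in silico directed-evolution idea, with a placeholder potency score in place of the paper's predictor:

```python
# Toy in silico directed evolution for peptides. The scoring function
# is a crude placeholder, NOT EvoGradient's learned potency model.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def potency(seq):
    # Placeholder: cationic/hydrophobic residues loosely correlate with
    # AMP activity; a real model would be learned from data.
    return sum(seq.count(a) for a in "KRL") / len(seq)

def evolve(seq, rounds=50):
    best = seq
    for _ in range(rounds):
        pos = random.randrange(len(best))
        mutant = best[:pos] + random.choice(AMINO_ACIDS) + best[pos + 1:]
        if potency(mutant) > potency(best):  # greedy hill climbing
            best = mutant
    return best

random.seed(0)
start = "GIGAVLKVLTTGLPALIS"
print(start, "->", evolve(start))
```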

Drug discovery and development is a challenging and time-consuming process. Laboratory experiments conducted on vidarabine showed IC50 values of 6.97 µg/mL, 25.
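For reference, an IC50 reported in µg/mL converts to molar units via the compound's molecular weight (vidarabine, C10H13N5O4, ≈267.24 g/mol):

```python
# Convert an IC50 from µg/mL to µM: µM = 1000 * (µg/mL) / MW(g/mol).
MW_VIDARABINE = 267.24          # g/mol (C10H13N5O4)
ic50_ug_per_ml = 6.97
ic50_um = 1000 * ic50_ug_per_ml / MW_VIDARABINE
print(f"{ic50_um:.1f} µM")      # ~26.1 µM
```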

ResViT FusionNet Model: An explainable AI-driven approach for automated grading of diabetic retinopathy in retinal images.

Comput Biol Med

January 2025

Department of Creative Technologies, Air University, Islamabad, 44000, Pakistan.

Background and Objective: Diabetic retinopathy (DR) is a serious diabetes complication that can cause blindness if not diagnosed in its early stages. Manual diagnosis by ophthalmologists is labor-intensive and time-consuming, particularly in overburdened healthcare systems. This highlights the need for automated, accurate, and personalized machine learning approaches for early DR detection and treatment.
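ResViT FusionNet's exact architecture is not given in this snippet; the sketch below shows a generic CNN + Vision Transformer feature fusion for five DR grades using standard torchvision backbones, as one plausible reading of the name:

```python
# Generic CNN + ViT feature-fusion classifier for 5-grade DR.
# An assumed fusion of torchvision backbones, not the paper's
# exact ResViT FusionNet architecture.

import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, num_grades=5):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])  # 512-d
        self.vit = models.vit_b_16(weights=None)
        self.vit.heads = nn.Identity()                           # 768-d
        self.classifier = nn.Linear(512 + 768, num_grades)

    def forward(self, x):                 # x: (B, 3, 224, 224)
        cnn_feat = self.cnn(x).flatten(1)
        vit_feat = self.vit(x)
        return self.classifier(torch.cat([cnn_feat, vit_feat], dim=1))

logits = FusionNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```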
