Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in clinical practice for supporting the decision-making process, and the different modalities capture complementary aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is therefore a clinically important problem. Our methods adapt commonly used post-hoc feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories: gradient-based and perturbation-based methods.
• Gradient-based explanation methods, such as Guided BackProp and DeepLift, use the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, use input-output sampling pairs to estimate feature importance.
• We describe the implementation details needed to make these methods work for multi-modal image input, and we make the implementation code available; an illustrative sketch follows below.
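
The sketch below is not the authors' released code; it is a minimal illustration, assuming a PyTorch classifier and the Captum attribution library, of how gradient- and perturbation-based attribution can be applied to a multi-modal image whose modalities are stacked along the channel dimension. The model architecture, modality count, and image size are illustrative assumptions.

```python
import torch
import torch.nn as nn
from captum.attr import GuidedBackprop, DeepLift, Occlusion

# Hypothetical classifier taking 4 co-registered modalities as input channels
# (e.g., multi-parametric MRI sequences stacked as a 4-channel image).
model = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)
model.eval()

# One multi-modal input: batch x modalities x H x W.
x = torch.randn(1, 4, 128, 128, requires_grad=True)
target_class = 1

# Gradient-based attribution: one importance map per modality channel.
gbp_attr = GuidedBackprop(model).attribute(x, target=target_class)
dl_attr = DeepLift(model).attribute(
    x, baselines=torch.zeros_like(x), target=target_class
)

# Perturbation-based attribution: occlude a sliding spatial patch across all
# modality channels jointly and score the change in the model output.
occ_attr = Occlusion(model).attribute(
    x,
    sliding_window_shapes=(4, 16, 16),  # window spans all 4 modalities
    strides=(4, 8, 8),
    target=target_class,
)

# Each attribution tensor has the same shape as the input, (1, 4, 128, 128),
# so importance can be inspected per modality and per spatial location.
print(gbp_attr.shape, dl_attr.shape, occ_attr.shape)
```

Stacking modalities as channels, as assumed here, lets a single attribution call return a separate importance map for each modality; occluding all channels jointly (rather than one at a time) attributes importance to spatial regions across the whole multi-modal input.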


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9922805
DOI: http://dx.doi.org/10.1016/j.mex.2023.102009
