Deep neural networks (DNNs) have achieved physician-level accuracy on many imaging-based medical diagnostic tasks, for example the classification of retinal images in ophthalmology. However, their decision mechanisms are often considered impenetrable, leading to a lack of trust by clinicians and patients. To alleviate this issue, a range of explanation methods have been proposed to expose the inner workings of DNNs leading to their decisions. For imaging-based tasks, this is often achieved via saliency maps. The quality of these maps is typically evaluated via perturbation analysis, without experts involved. To facilitate the adoption and success of such automated systems, however, it is crucial to validate saliency maps against clinicians. In this study, we used three different network architectures and developed ensembles of DNNs to detect diabetic retinopathy and neovascular age-related macular degeneration from retinal fundus images and optical coherence tomography scans, respectively. We used a variety of explanation methods and obtained a comprehensive set of saliency maps for explaining the ensemble-based diagnostic decisions. We then systematically validated the saliency maps against clinicians through two main analyses: a direct comparison of saliency maps with expert annotations of disease-specific pathologies, and perturbation analyses using the expert annotations themselves as saliency maps. We found that the choice of DNN architecture and explanation method significantly influences the quality of saliency maps. Guided Backprop showed consistently good performance across disease scenarios and DNN architectures, suggesting that it provides a suitable starting point for explaining the decisions of DNNs on retinal images.
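The perturbation analysis mentioned above can be illustrated with a minimal, library-agnostic sketch: occlude increasing fractions of the most-salient pixels and watch how quickly the model's score degrades. A faithful saliency map should drive the score down fast. Note that `toy_score`, the array sizes, and the masking scheme below are illustrative assumptions, not the study's actual pipeline, which used DNN ensembles on retinal images.

```python
import numpy as np

def perturbation_curve(image, saliency, score_fn, steps=10, fill=0.0):
    """Mask increasing fractions of the most-salient pixels and record
    the model score after each step; a faithful saliency map should
    produce a rapidly falling curve (small area under it)."""
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    n = order.size
    scores = [score_fn(image)]
    for k in range(1, steps + 1):
        masked = image.ravel().copy()
        masked[order[: n * k // steps]] = fill  # occlude the top fraction
        scores.append(score_fn(masked.reshape(image.shape)))
    return np.array(scores)

# Toy "model" that only looks at the image centre (a stand-in for a
# trained classifier's confidence, purely for illustration).
def toy_score(img):
    return float(img[2:6, 2:6].mean())

rng = np.random.default_rng(0)
img = rng.random((8, 8))
good_sal = np.zeros((8, 8))
good_sal[2:6, 2:6] = 1.0      # highlights exactly what the model uses
bad_sal = 1.0 - good_sal      # highlights everything else

good = perturbation_curve(img, good_sal, toy_score)
bad = perturbation_curve(img, bad_sal, toy_score)
print(good.sum() < bad.sum())  # the faithful map degrades the score faster
```

Expert annotations can be plugged in as `saliency` directly, which is how the study's second analysis treats clinician markings as reference saliency maps.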

Source: http://dx.doi.org/10.1016/j.media.2022.102364

Publication Analysis

Top Keywords: saliency maps (32), maps (9), saliency (8), deep neural (8), neural networks (8), retinal images (8), explanation methods (8), maps clinicians (8), expert annotations (8), clinical validation (4)

Similar Publications

Objective: Functional magnetic resonance imaging data pose significant challenges due to their inherently noisy and complex nature, making traditional statistical models less effective in capturing predictive features. While deep learning models offer superior performance through their non-linear capabilities, they often lack transparency, reducing trust in their predictions. This study introduces the Time Reversal (TR) pretraining method to address these challenges.

Purpose: The purpose of this study was to develop and validate a deep-learning model for noninvasive anemia detection, hemoglobin (Hb) level estimation, and identification of anemia-related retinal features using fundus images.

Methods: The dataset included 2265 participants aged 40 years and above from a population-based study in South India. It comprised ocular and systemic clinical parameters, dilated retinal fundus images, and hematological data such as complete blood counts and Hb concentration levels.

Despite decades of advancements in diagnostic MRI, 30-50% of temporal lobe epilepsy (TLE) patients remain categorized as "non-lesional" (i.e., MRI negative or MRI-) based on visual assessment by human experts.

Clinicians' perspectives on the use of artificial intelligence to triage MRI brain scans.

Eur J Radiol

January 2025

School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom; Department of Neuroradiology, King's College Hospital National Health Service Foundation Trust, London, United Kingdom.

Artificial intelligence (AI) tools can triage radiology scans to streamline the patient pathway and relieve clinician workload. Validated AI tools can mitigate delays in reporting scans by flagging time-sensitive, actionable findings. In this study, we aim to investigate current stakeholder perspectives and identify obstacles to integrating AI into clinical pathways.

Rad4XCNN: A new agnostic method for post-hoc global explanation of CNN-derived features by means of Radiomics.

Comput Methods Programs Biomed

January 2025

Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, 90127, Italy.

Article Synopsis
  • Machine learning-based clinical decision support systems (CDSS) face challenges with transparency and reliability, as explainability often reduces predictive accuracy.
  • A novel method called Rad4XCNN enhances the predictive power of CNN features while maintaining interpretability through Radiomics, moving beyond traditional saliency maps.
  • In breast cancer classification tasks, Rad4XCNN demonstrates superior accuracy compared to other feature types and allows for global insights, effectively addressing the explainability-accuracy trade-off in AI models.