The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, has shown strong performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trustworthy and explainable model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight the regions of the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM [1], and non-gradient-based approaches, such as Eigen-CAM [2], are applicable to YOLO models and do not require new layer implementations. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDrCXR Chest X-ray Abnormalities Detection dataset [3] and discusses the limitations of these methods for explaining model decisions to data scientists.
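The Eigen-CAM method mentioned above needs no gradients: given the activations of a convolutional layer (obtained, e.g., via a forward hook on a YOLO backbone), the heatmap is the projection of each spatial feature vector onto the first principal component of the activations. A minimal NumPy sketch of that projection step follows; the function name and the ReLU/normalization conventions are our own illustrative choices, and practical implementations additionally resolve the sign ambiguity of the SVD:

```python
import numpy as np

def eigen_cam(activations: np.ndarray) -> np.ndarray:
    """Compute an Eigen-CAM style heatmap from a (C, H, W) activation tensor.

    Each spatial location holds a C-dimensional feature vector; the heatmap
    is that vector's projection onto the first principal component of all
    spatial feature vectors, rectified and scaled to [0, 1].
    """
    C, H, W = activations.shape
    flat = activations.reshape(C, H * W).T        # (H*W, C): one row per pixel
    flat = flat - flat.mean(axis=0)               # center the feature vectors
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    heatmap = (flat @ vt[0]).reshape(H, W)        # project onto the 1st PC
    heatmap = np.maximum(heatmap, 0)              # keep positive evidence only
    if heatmap.max() > 0:
        heatmap = heatmap / heatmap.max()         # normalize to [0, 1]
    return heatmap
```

The resulting H×W map is then upsampled to the input resolution and overlaid on the X-ray as the explanation heatmap; because no class-specific gradient is involved, the same map is produced regardless of which detection is being explained, which is one of the limitations discussed for Eigen-CAM.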
DOI: http://dx.doi.org/10.3233/SHTI230416
Background: Skin cancer poses a significant global health threat, with early detection being essential for successful treatment. While deep learning algorithms have greatly enhanced the categorization of skin lesions, the black-box nature of many models limits interpretability, posing challenges for dermatologists.
Methods: To address these limitations, SkinSage XAI utilizes advanced explainable artificial intelligence (XAI) techniques for skin lesion categorization.
J Imaging
December 2024
PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, 10129 Turin, Italy.
Skin cancer is among the most prevalent cancers globally, emphasizing the need for early detection and accurate diagnosis to improve outcomes. Traditional diagnostic methods, based on visual examination, are subjective, time-intensive, and require specialized expertise. Current artificial intelligence (AI) approaches for skin cancer detection face challenges such as computational inefficiency, lack of interpretability, and reliance on standalone CNN architectures.
J Med Internet Res
December 2024
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.
Background: Wearable technologies have become increasingly prominent in health care. However, intricate machine learning and deep learning algorithms often lead to the development of "black box" models, which lack transparency and comprehensibility for medical professionals and end users. In this context, the integration of explainable artificial intelligence (XAI) has emerged as a crucial solution.
Heliyon
December 2024
Future Technology Research Center, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliu, Yunlin, 64002, Taiwan, ROC.
The quality of vegetables and fruits is judged by their visual features, and misclassifying them leads to financial loss. To prevent such losses, superstores need to classify fruits and vegetables by size, color, and shape.
JMIR Dermatol
December 2024
K.E.M. Hospital, Mumbai, India.
Background: Thus far, considerable research has focused on classifying a lesion as benign or malignant. However, rapid estimation of a lesion's depth is also needed for accurate clinical staging, since a malignant lesion can quickly grow beneath the skin.