A novel evolutionary approach for Explainable Artificial Intelligence is presented: the "Evolved Explanations" model (EvEx). This methodology combines Local Interpretable Model-Agnostic Explanations (LIME) with Multi-Objective Genetic Algorithms to automate the tuning of segmentation parameters in image classification tasks. In this case, the dataset studied is Patch-Camelyon, comprising patches extracted from pathology whole slide images.
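A minimal sketch of this kind of pipeline, not the paper's implementation: it assumes Felzenszwalb segmentation, a toy stand-in classifier, and two illustrative objectives (the R² fidelity of LIME's local surrogate model and superpixel parsimony), since the paper's exact objectives and operators are not given here. A simple Pareto-front evolutionary loop stands in for a full multi-objective GA such as NSGA-II.

```python
import random
import numpy as np
from lime import lime_image
from skimage import data
from skimage.segmentation import felzenszwalb
from skimage.transform import resize

rng = random.Random(0)
# Small toy "patch" so the loop runs quickly; replace with a real histology patch.
image = resize(data.astronaut(), (64, 64), anti_aliasing=True)

def classifier_fn(images):
    # Stand-in for the trained CNN: "tumor" probability grows with mean red intensity.
    p = np.clip(np.asarray(images)[..., 0].mean(axis=(1, 2)), 0, 1)
    return np.stack([1 - p, p], axis=1)

explainer = lime_image.LimeImageExplainer(random_state=0)

def evaluate(params):
    scale, sigma, min_size = params
    seg_fn = lambda img: felzenszwalb(img, scale=scale, sigma=sigma, min_size=min_size)
    exp = explainer.explain_instance(image, classifier_fn, top_labels=1,
                                     num_samples=200, segmentation_fn=seg_fn)
    label = exp.top_labels[0]
    fidelity = exp.score[label] if isinstance(exp.score, dict) else exp.score
    n_segments = len(np.unique(seg_fn(image)))
    # Two illustrative objectives, both maximized: R^2 fidelity of LIME's
    # local surrogate model, and parsimony (fewer superpixels).
    return (fidelity, -n_segments)

def mutate(params):
    scale, sigma, min_size = params
    return (max(1.0, scale * rng.uniform(0.5, 1.5)),
            max(0.1, sigma + rng.uniform(-0.2, 0.2)),
            max(5, int(min_size * rng.uniform(0.5, 1.5))))

def dominated(f, g):
    # True if fitness f is Pareto-dominated by fitness g.
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

population = [(rng.uniform(10.0, 300.0), rng.uniform(0.1, 2.0), rng.randint(5, 100))
              for _ in range(8)]
for _ in range(5):  # generations
    scored = [(p, evaluate(p)) for p in population]
    # Keep the non-dominated front, refill the population with mutants.
    front = [(p, f) for p, f in scored
             if not any(dominated(f, g) for _, g in scored)]
    population = [p for p, _ in front]
    while len(population) < 8:
        population.append(mutate(rng.choice(front)[0]))

for p, f in [(p, evaluate(p)) for p in population]:
    print(f"scale={p[0]:.1f} sigma={p[1]:.2f} min_size={p[2]} "
          f"-> fidelity={f[0]:.3f}, segments={-f[1]}")
```

Surviving individuals approximate a Pareto front of segmentation settings trading off explanation fidelity against complexity; any of the hypothetical parameter ranges above would be tuned to the task at hand.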
Problem: An application of Explainable Artificial Intelligence methods for COVID CT-scan classifiers is presented.
Motivation: Classifiers may be relying on spurious artifacts in the dataset images to achieve high performance, and explainable techniques can help identify this issue.
Aim: For this purpose, several approaches were used in tandem to build a complete overview of the classifications.
An application of explainable artificial intelligence to medical data is presented. There is an increasing demand in the machine learning literature for such explainable models in health-related applications. This work aims to generate explanations of how a Convolutional Neural Network (CNN) detects tumor tissue in patches extracted from histology whole slide images.
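As an illustration of that aim, here is a minimal sketch, not the paper's implementation: it produces a LIME superpixel overlay for a single patch, assuming an untrained stand-in CNN, a random placeholder patch (96x96 RGB, matching PatchCamelyon dimensions), LIME's default quickshift segmentation, and a hypothetical output filename.

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in, untrained two-class CNN; replace with the trained tumor-detection network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

patch = np.random.rand(96, 96, 3)  # placeholder for a real histology patch

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    patch,
    lambda imgs: model.predict(np.asarray(imgs), verbose=0),
    top_labels=1,
    num_samples=300,
)

# Overlay the superpixels that most support the predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
plt.imshow(mark_boundaries(img, mask))
plt.title(f"Superpixels supporting class {label}")
plt.savefig("lime_patch_explanation.png")
```

With a trained network and a real patch, the highlighted superpixels indicate which tissue regions drove the tumor/normal prediction.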