An application of explainable artificial intelligence to medical data is presented. There is growing demand in the machine learning literature for explainable models in health-related applications. This work aims to explain how a convolutional neural network (CNN) detects tumor tissue in patches extracted from histology whole-slide images, using the Local Interpretable Model-Agnostic Explanations (LIME) methodology. Two publicly available CNNs trained on the PatchCamelyon benchmark are analyzed. Three common segmentation algorithms are compared for superpixel generation, and a fourth, simpler, parameter-free segmentation algorithm is proposed. The main characteristics of the explanations are discussed, along with the key patterns identified in true-positive predictions. The results are compared against medical annotations and the literature, and suggest that the CNN predictions follow at least some aspects of human expert knowledge.
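The workflow the abstract describes can be sketched in Python with the `lime` and `scikit-image` packages. This is a minimal sketch under stated assumptions: the `predict_fn` wrapper and the random placeholder patch are hypothetical stand-ins, since the abstract does not detail the actual networks, and the segmenter parameters are illustrative rather than the paper's settings.

```python
# Sketch: LIME explanation of a tumor-patch classifier with
# superpixel segmentation (assumes lime, scikit-image, numpy installed).
import numpy as np
from lime import lime_image
from skimage.segmentation import slic, quickshift, felzenszwalb

def predict_fn(images):
    """Hypothetical classifier wrapper: maps a batch of RGB patches to
    class probabilities [normal, tumor]. Replace with the real CNN."""
    scores = np.random.rand(len(images), 1)  # placeholder scores only
    return np.hstack([1.0 - scores, scores])

# One 96x96 RGB patch, the PatchCamelyon input format
# (random data here; a real patch would come from the dataset).
patch = np.random.rand(96, 96, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    patch,
    predict_fn,
    top_labels=2,
    num_samples=1000,
    # Superpixel generation step; parameters are illustrative.
    segmentation_fn=lambda img: slic(img, n_segments=50, compactness=10),
)

# Superpixels that push the prediction toward the top class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```

Swapping the `segmentation_fn` callable for `quickshift` or `felzenszwalb` reproduces the kind of segmenter comparison the abstract mentions.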

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6651753
DOI: http://dx.doi.org/10.3390/s19132969
