Interpretability is highly desirable for deep neural network-based classifiers, especially when they support high-stakes decisions in medical imaging. Commonly used post-hoc interpretability methods have the limitation that they can produce plausible but mutually different interpretations of a given model, leading to ambiguity about which one to choose. To address this problem, a novel decision-theory-inspired approach is investigated to establish a self-interpretable model, given a pre-trained deep binary black-box medical image classifier.
Treatment of blood smears with Wright's stain is one of the most helpful tools in detecting white blood cell abnormalities. However, to diagnose leukocyte disorders, a clinical pathologist must perform a tedious, manual process of locating and identifying individual cells. Furthermore, the staining procedure requires considerable preparation time and clinical infrastructure, which is incompatible with point-of-care diagnosis.
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that most influence a model's decision.
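Gradient-based saliency maps are one widely used way to visualize such influential features. The sketch below is only illustrative, assuming a PyTorch setup; the resnet18 backbone, the random 224x224 tensor standing in for a medical image, and the use of plain input gradients are placeholder assumptions, not the specific method used in the studies described here.

```python
# Minimal sketch of a gradient-based saliency map: highlight the input pixels
# whose perturbation would most change the classifier's predicted-class score.
# The model and input below are placeholders, not the cited works' setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder classifier
model.eval()

# Stand-in for a preprocessed medical image; gradients are tracked w.r.t. it.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
target_class = logits.argmax(dim=1).item()

# Back-propagate the predicted class score to the input pixels.
score = logits[0, target_class]
score.backward()

# Saliency = per-pixel gradient magnitude, reduced over the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
print(saliency.shape)
```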
An overview of the applications of deep learning for ophthalmic diagnosis using retinal fundus images is presented. We describe various retinal image datasets that can be used for deep learning purposes. Applications of deep learning for segmentation of the optic disk, optic cup, and blood vessels, as well as detection of lesions, are reviewed.