We propose an attention-based method that aggregates local image features into a subject-level representation for predicting disease severity. In contrast to classical deep learning, which requires a fixed-dimensional input, our method operates on a set of image patches; hence it can accommodate variable-length input images without resizing. The model learns a clinically interpretable subject-level representation that reflects the disease severity. Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features. The generative term ensures that the attention weights are non-degenerate while maintaining the relevance of the local regions to the disease severity. We train our model end-to-end in the context of a large-scale lung CT study of Chronic Obstructive Pulmonary Disease (COPD). Our model gives state-of-the-art performance in predicting clinical measures of severity for COPD. The distribution of the attention provides the regional relevance of lung tissue to the clinical measurements.
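To make the aggregation step concrete, below is a minimal sketch (in PyTorch) of attention-weighted pooling of a variable number of patch features into a single subject-level vector, followed by a severity prediction head. The module names, feature dimensions, and the softmax-based attention scoring are illustrative assumptions, not the authors' exact architecture, and the generative module described above is omitted.

```python
# Minimal sketch: attention-weighted aggregation of patch features into a
# subject-level representation for severity prediction. Names and dimensions
# are assumptions for illustration only.
import torch
import torch.nn as nn


class PatchAttentionAggregator(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Scores each patch feature; higher score => larger attention weight.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        # Maps the aggregated subject-level representation to a severity score.
        self.severity_head = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (num_patches, feat_dim); num_patches may vary per subject.
        scores = self.attention(patch_feats)             # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)           # attention over patches
        subject_repr = (weights * patch_feats).sum(dim=0)  # (feat_dim,)
        severity = self.severity_head(subject_repr)      # scalar prediction
        return severity, weights


# Usage: two subjects with different numbers of patches (no resizing needed).
model = PatchAttentionAggregator()
for n_patches in (50, 300):
    feats = torch.randn(n_patches, 128)   # stand-in for local patch features
    pred, attn = model(feats)
    print(pred.shape, attn.shape)
```

Because the attention weights sum to one over the patches, they can be mapped back onto the anatomy to visualize which lung regions drive the prediction, which is the source of the interpretability discussed in the abstract.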


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6422035
DOI: http://dx.doi.org/10.1007/978-3-030-00928-1_57

