Humans effortlessly interpret images by parsing them into part-whole hierarchies; deep networks excel at learning multi-level feature spaces, but they often lack explicit encoding of part-whole relations, a prominent property of medical imaging. To overcome this limitation, we introduce Adam-v2, a new self-supervised learning framework that extends Adam [79] by explicitly incorporating part-whole hierarchies into its learning objectives through three key branches: (1) Localizability, acquiring discriminative representations that distinguish different anatomical patterns; (2) Composability, learning each anatomical structure in a parts-to-whole manner; and (3) Decomposability, comprehending each anatomical structure in a whole-to-parts manner. Experimental results on 10 tasks, against 11 baselines in zero-shot, few-shot transfer, and full fine-tuning settings, show Adam-v2's superior performance over large-scale medical models and existing SSL methods across diverse downstream tasks. The greater generality and robustness of Adam-v2's representations stem from its explicit construction of hierarchies for distinct anatomical structures from unlabeled medical images. Adam-v2 preserves a semantic balance of anatomical diversity and harmony in its embedding, yielding representations that are both generic and semantically meaningful, a property overlooked by existing SSL methods. All code and pretrained models are available at GitHub.com/JLiangLab/Eden.
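The three branches can be pictured as three toy losses over embedding vectors. Everything below is an illustrative assumption, not the paper's architecture: the InfoNCE-style form of the localizability term, the mean-pooling aggregation of parts, and the hypothetical linear whole-to-parts head `W` are all stand-ins for the deep networks Adam-v2 actually trains.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def localizability_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE-style objective: pull an anatomical pattern toward its
    # positive view, push it away from other anatomical patterns.
    sims = np.array([cosine(anchor, positive)] +
                    [cosine(anchor, n) for n in negatives]) / tau
    sims -= sims.max()                        # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])

def composability_loss(part_embs, whole_emb):
    # Parts-to-whole: an aggregate (here, the mean) of part embeddings
    # should agree with the whole-structure embedding.
    return 1.0 - cosine(part_embs.mean(axis=0), whole_emb)

def decomposability_loss(whole_emb, part_embs, W):
    # Whole-to-parts: a hypothetical linear head W predicts every
    # part embedding from the whole-structure embedding.
    predicted = (W @ whole_emb).reshape(part_embs.shape)
    return float(np.mean((predicted - part_embs) ** 2))

# Toy usage with random embeddings, just to exercise the objectives.
rng = np.random.default_rng(0)
d = 8
whole = rng.normal(size=d)
parts = rng.normal(size=(4, d))
total = (localizability_loss(whole, whole + 0.01 * rng.normal(size=d),
                             rng.normal(size=(5, d)))
         + composability_loss(parts, whole)
         + decomposability_loss(whole, parts, rng.normal(size=(4 * d, d))))
```

In the real framework each term would be computed by learned encoder/decoder networks over image crops and minimized jointly; the sketch only shows how the three objectives pull in parts-to-whole and whole-to-parts directions at once.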

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11636527
DOI: http://dx.doi.org/10.1109/cvpr52733.2024.01071


Similar Publications

Panoptic Part Segmentation (PPS) unifies panoptic and part segmentation into one task. Previous works utilize separate approaches to handle things, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer.

A sensory-motor theory of the neocortex.

Nat Neurosci

July 2024

Center for Neurotechnology, University of Washington, Seattle, WA, USA.

Recent neurophysiological and neuroanatomical studies suggest a close interaction between sensory and motor processes across the neocortex. Here, I propose that the neocortex implements active predictive coding (APC): each cortical area estimates both latent sensory states and actions (including potentially abstract actions internal to the cortex), and the cortex as a whole predicts the consequences of actions at multiple hierarchical levels. Feedback from higher areas modulates the dynamics of state and action networks in lower areas.
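The APC idea sketched above can be made concrete with a toy two-level loop. All specifics here are illustrative assumptions rather than the paper's model: the random dynamics matrices, the simple corrective action, the additive top-down feedback, and the fast/slow update gains are made up to show how prediction errors propagate across the hierarchy.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = 0.1 * rng.normal(size=(d, d))   # lower-level state dynamics (hypothetical)
B = 0.1 * rng.normal(size=(d, d))   # effect of the chosen action (hypothetical)

state_low = np.zeros(d)             # lower-area latent state estimate
state_high = np.zeros(d)            # higher-area latent state estimate
observation = np.ones(d)            # fixed toy sensory input

errors = []
for _ in range(30):
    action = -0.5 * state_low                             # simple corrective action
    # The lower level predicts its next input from its state, its action,
    # and top-down feedback from the higher level.
    prediction = A @ state_low + B @ action + state_high
    error = observation - prediction                      # prediction error
    state_low = prediction + 0.5 * error                  # fast lower-level update
    state_high = state_high + 0.2 * error                 # slow higher-level update
    errors.append(float(np.mean(error ** 2)))
# The mean squared prediction error shrinks as both levels adapt.
```

The point of the sketch is the division of labor: the higher level changes slowly and supplies context that modulates the lower level's dynamics, while the lower level tracks the input quickly, mirroring the multi-timescale hierarchy the abstract describes.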

Capsule networks (CapsNets) aim to parse images into a hierarchy of objects, parts, and their relationships using a two-step process involving part-whole transformation and hierarchical component routing. However, this hierarchical relationship modeling is computationally expensive, which has limited the wider use of CapsNet despite its potential advantages. The current state of CapsNet models primarily focuses on comparing their performance with capsule baselines, falling short of achieving the same level of proficiency as deep convolutional neural network (CNN) variants in intricate tasks.
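The hierarchical routing step the snippet calls computationally expensive can be sketched in a few lines of numpy. This is a toy illustration of routing-by-agreement in the spirit of Sabour et al.'s dynamic routing, with made-up shapes and hand-crafted votes, not the model the snippet evaluates: part capsules cast pose "votes" for each whole, and routing iteratively favors wholes whose votes agree.

```python
import numpy as np

def squash(v, eps=1e-9):
    # Capsule nonlinearity: preserves direction, maps length into [0, 1).
    n2 = np.sum(v ** 2, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def route(votes, iters=3):
    # votes: (num_parts, num_wholes, dim) predictions from part capsules.
    P, W, _ = votes.shape
    b = np.zeros((P, W))                                      # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * votes).sum(axis=0)                # weighted vote sum
        out = squash(s)                                       # whole capsules
        b += np.einsum('pwd,wd->pw', votes, out)              # reward agreement
    return out

# Six parts all agree on whole 0; their votes for whole 1 cancel out.
votes = np.zeros((6, 2, 4))
votes[:, 0, :] = [1.0, 0.0, 0.0, 0.0]
for p in range(6):
    votes[p, 1, :] = [0.0, (-1.0) ** p, 0.0, 0.0]
wholes = route(votes)
# The agreed-upon whole ends up with the larger activation length.
```

The cost the snippet complains about is visible even here: every part votes for every whole at every iteration, so the work grows with parts x wholes x routing iterations.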

Article Synopsis
  • The segmentation of the left ventricle (LV) in echocardiographic images is crucial for accurately diagnosing and treating cardiovascular diseases, as it helps assess important cardiac metrics like volume and ejection fraction.
  • While traditional manual methods of LV segmentation can be tedious and error-prone, deep learning techniques like convolutional neural networks (CNNs) have been popular; however, they have limitations such as loss of spatial information and a need for large datasets.
  • This study introduces SegCaps, a new optimized capsule-based network for LV segmentation, which outperformed the standard 2D-UNet by achieving a higher accuracy with significantly fewer parameters, facilitating more precise cardiac evaluations in clinical settings.
