AI Article Synopsis

  • Cross-modal 3D shape retrieval aims to measure similarity between different 3D representations, but existing methods struggle with single-modal limitations and modality gaps.
  • To address these challenges, the proposed Heterogeneous Dynamic Graph Representation (HDGR) network uses dynamic graphs to capture relationships among diverse 3D objects, enhancing feature extraction through techniques like Dynamic Graph Convolution.
  • Extensive testing on multiple datasets shows that HDGR significantly outperforms existing methods in both cross-modal and intra-modal retrieval tasks, while also demonstrating robustness to label noise.

Article Abstract

Cross-modal 3D shape retrieval is a crucial and widely applied task in the field of 3D vision. Its goal is to construct retrieval representations capable of measuring the similarity between instances of different 3D modalities. However, existing methods face challenges due to the performance bottlenecks of single-modal representation extractors and the modality gap across 3D modalities. To tackle these issues, we propose a Heterogeneous Dynamic Graph Representation (HDGR) network, which incorporates context-dependent dynamic relations within a heterogeneous framework. By capturing correlations among diverse 3D objects, HDGR overcomes the limitations of ambiguous representations obtained solely from instances. Within the context of varying mini-batches, dynamic graphs are constructed to capture proximal intra-modal relations, and dynamic bipartite graphs represent implicit cross-modal relations, effectively addressing the two challenges above. Subsequently, message passing and aggregation are performed using Dynamic Graph Convolution (DGConv) and Dynamic Bipartite Graph Convolution (DBConv), enhancing features through heterogeneous dynamic relation learning. Finally, intra-modal, cross-modal, and self-transformed features are redistributed and integrated into a heterogeneous dynamic representation for cross-modal 3D shape retrieval. HDGR establishes a stable, context-enhanced, structure-aware 3D shape representation by capturing heterogeneous inter-object relationships and adapting to varying contextual dynamics. Extensive experiments conducted on the ModelNet10, ModelNet40, and real-world ABO datasets demonstrate the state-of-the-art performance of HDGR in cross-modal and intra-modal retrieval tasks. Moreover, under the supervision of robust loss functions, HDGR achieves remarkable cross-modal retrieval against label noise on the 3D MNIST dataset. The comprehensive experimental results highlight the effectiveness and efficiency of HDGR on cross-modal 3D shape retrieval.
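The abstract describes DGConv and DBConv only at a high level, with no implementation details. As a rough, non-authoritative illustration of the idea, the PyTorch sketch below shows one plausible way to build a k-NN dynamic graph over a mini-batch and a dynamic bipartite graph across two modalities, then pass and aggregate messages. The class names, the k-NN construction, the mean aggregation, and the additive fusion in the usage example are all assumptions for illustration, not the authors' actual architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def knn_indices(x, k):
        # Indices of the k nearest neighbors of each row of x (B, dim),
        # computed within the current mini-batch (self-loops excluded).
        dist = torch.cdist(x, x)                    # (B, B) pairwise distances
        dist.fill_diagonal_(float('inf'))           # exclude self-matches
        return dist.topk(k, largest=False).indices  # (B, k)

    class DynamicGraphConv(nn.Module):
        # Hypothetical stand-in for the paper's DGConv: aggregate messages
        # from intra-modal k-NN neighbors found in the current mini-batch.
        def __init__(self, dim, k=4):
            super().__init__()
            self.k = k
            self.fc = nn.Linear(2 * dim, dim)       # mixes [self, neighbor mean]

        def forward(self, x):                       # x: (B, dim)
            neighbors = x[knn_indices(x, self.k)]   # (B, k, dim)
            msg = neighbors.mean(dim=1)             # simple mean aggregation
            return F.relu(self.fc(torch.cat([x, msg], dim=-1)))

    class DynamicBipartiteGraphConv(nn.Module):
        # Hypothetical stand-in for DBConv: each instance in modality A
        # aggregates messages from its k nearest neighbors in modality B.
        def __init__(self, dim, k=4):
            super().__init__()
            self.k = k
            self.fc = nn.Linear(2 * dim, dim)

        def forward(self, xa, xb):                  # xa: (Ba, dim), xb: (Bb, dim)
            idx = torch.cdist(xa, xb).topk(self.k, largest=False).indices
            msg = xb[idx].mean(dim=1)               # cross-modal neighbor mean
            return F.relu(self.fc(torch.cat([xa, msg], dim=-1)))

    # Usage: fuse intra- and cross-modal messages into one retrieval feature
    # (the additive fusion here is an assumption, not the paper's scheme).
    img, pts = torch.randn(8, 256), torch.randn(8, 256)
    fused = DynamicGraphConv(256)(img) + DynamicBipartiteGraphConv(256)(img, pts)

Because the graphs are rebuilt from pairwise distances inside every forward pass, the neighborhood structure changes with each mini-batch, which matches the abstract's notion of context-dependent dynamic relations.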

Source
http://dx.doi.org/10.1109/TPAMI.2024.3524440

Publication Analysis

Top Keywords

cross-modal shape: 16
shape retrieval: 16
heterogeneous dynamic: 16
dynamic graph: 12
dynamic: 9
cross-modal: 8
graph representation: 8
representation cross-modal: 8
dynamic bipartite: 8
graph convolution: 8

Similar Publications

Research on brain plasticity, particularly in the context of deafness, consistently emphasizes the reorganization of the auditory cortex. But to what extent do all individuals with deafness show the same level of reorganization? To address this question, we examined individual differences in functional connectivity (FC) from the deprived auditory cortex. Our findings demonstrate remarkable differentiation between individuals, stemming from the absence of shared auditory experiences: FC was highly variable among deaf individuals, compared to more consistent FC in the hearing group.

Cross-modal correspondences between audition and olfaction have received relatively less attention compared to other modality pairs. This study expands on previous work regarding timbre-aroma correspondences by examining the semantic mediation hypothesis, according to which cross-modal correspondences may be partly explained by the existence of common semantic qualities. In a behavioral experiment, 26 musically trained participants rated 26 complex synthetic tones and 12 aromatic stimuli across two separate blocks using a common set of semantic scales.

Audiovisual cross-modal correspondences (CMCs) refer to the brain's inherent ability to subconsciously connect auditory and visual information. These correspondences reveal essential aspects of multisensory perception and influence behavioral performance, enhancing reaction times and accuracy. However, the impact of different types of CMCs, whether arising from statistical co-occurrences or shaped by semantic associations, on information processing and decision-making remains underexplored.

Combining multisensory cues is fundamental for perception and action and reflected by two frequently studied phenomena: multisensory integration and sensory recalibration. In the context of audio-visual spatial signals, these phenomena are exemplified by the ventriloquism effect and its aftereffect. The ventriloquism effect occurs when the perceived location of a sound is biased by a concurrent visual stimulus, while the aftereffect manifests as a recalibration of perceived sound location after exposure to spatially discrepant stimuli.
