Cross-modal 3D shape retrieval is a crucial and widely applied task in 3D vision. Its goal is to construct retrieval representations capable of measuring the similarity between instances of different 3D modalities. However, existing methods are limited by the performance bottlenecks of single-modal representation extractors and by the modality gap across 3D modalities. To tackle these issues, we propose a Heterogeneous Dynamic Graph Representation (HDGR) network, which incorporates context-dependent dynamic relations within a heterogeneous framework. By capturing correlations among diverse 3D objects, HDGR overcomes the ambiguity of representations derived solely from individual instances. Within the context of varying mini-batches, dynamic graphs are constructed to capture proximal intra-modal relations, and dynamic bipartite graphs represent implicit cross-modal relations, addressing the two challenges above. Message passing and aggregation are then performed by Dynamic Graph Convolution (DGConv) and Dynamic Bipartite Graph Convolution (DBConv), enhancing features through heterogeneous dynamic relation learning. Finally, intra-modal, cross-modal, and self-transformed features are redistributed and integrated into a heterogeneous dynamic representation for cross-modal 3D shape retrieval. HDGR thus establishes a stable, context-enhanced, structure-aware 3D shape representation by capturing heterogeneous inter-object relationships and adapting to varying contextual dynamics. Extensive experiments on ModelNet10, ModelNet40, and the real-world ABO dataset demonstrate state-of-the-art performance of HDGR in both cross-modal and intra-modal retrieval. Moreover, when trained with robust loss functions, HDGR maintains strong cross-modal retrieval performance under label noise on the 3D MNIST dataset. These results highlight the effectiveness and efficiency of HDGR for cross-modal 3D shape retrieval.
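To make the described pipeline concrete, the PyTorch sketch below illustrates one plausible reading of the abstract: per-mini-batch k-NN graphs for intra-modal relations, a bipartite k-NN graph for cross-modal relations, DGConv/DBConv-style message passing, and fusion of self-transformed, intra-modal, and cross-modal features. The graph construction, layer shapes, and fusion scheme are assumptions inferred from the abstract, not the authors' implementation.

```python
# Hedged sketch of heterogeneous dynamic relation learning as described in the
# HDGR abstract. Layer forms (DGConv, DBConv), the k-NN graph construction, and
# the fusion step are assumptions, not the published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_adjacency(x, k):
    """Row-normalized k-NN adjacency within one modality's mini-batch features."""
    dist = torch.cdist(x, x)                               # (B, B) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-neighbor
    adj = torch.zeros(x.size(0), x.size(0), device=x.device)
    adj.scatter_(1, idx, 1.0)
    return adj / k


def bipartite_adjacency(x_a, x_b, k):
    """Row-normalized cross-modal k-NN adjacency from modality A to modality B."""
    dist = torch.cdist(x_a, x_b)
    idx = dist.topk(k, largest=False).indices
    adj = torch.zeros(x_a.size(0), x_b.size(0), device=x_a.device)
    adj.scatter_(1, idx, 1.0)
    return adj / k


class DGConv(nn.Module):
    """Dynamic graph convolution over the intra-modal k-NN graph (assumed form)."""
    def __init__(self, dim, k=5):
        super().__init__()
        self.k, self.proj = k, nn.Linear(dim, dim)

    def forward(self, x):
        adj = knn_adjacency(x, self.k)   # rebuilt for every mini-batch (dynamic context)
        return F.relu(self.proj(adj @ x))


class DBConv(nn.Module):
    """Dynamic bipartite graph convolution pulling messages from the other modality."""
    def __init__(self, dim, k=5):
        super().__init__()
        self.k, self.proj = k, nn.Linear(dim, dim)

    def forward(self, x_self, x_other):
        adj = bipartite_adjacency(x_self, x_other, self.k)
        return F.relu(self.proj(adj @ x_other))


class HDGRBlock(nn.Module):
    """Fuses self-transformed, intra-modal, and cross-modal features for one modality."""
    def __init__(self, dim, k=5):
        super().__init__()
        self.self_proj = nn.Linear(dim, dim)
        self.dgconv = DGConv(dim, k)
        self.dbconv = DBConv(dim, k)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, x_self, x_other):
        f_self = F.relu(self.self_proj(x_self))
        f_intra = self.dgconv(x_self)
        f_cross = self.dbconv(x_self, x_other)
        return self.fuse(torch.cat([f_self, f_intra, f_cross], dim=-1))


if __name__ == "__main__":
    # Hypothetical mini-batch embeddings for two 3D modalities (e.g., views and point clouds).
    imgs, pcs = torch.randn(32, 256), torch.randn(32, 256)
    block = HDGRBlock(dim=256)
    print(block(imgs, pcs).shape)  # torch.Size([32, 256])
```

In a full system one would stack such blocks per modality and train them with retrieval losses; the sketch only shows how the heterogeneous dynamic relations could be wired within a mini-batch.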
DOI: http://dx.doi.org/10.1109/TPAMI.2024.3524440
Elife
March 2025
Department of Neuroscience, Georgetown University Medical Center, Washington DC, United States.
Research on brain plasticity, particularly in the context of deafness, consistently emphasizes the reorganization of the auditory cortex. But to what extent do individuals with deafness show the same degree of reorganization? To address this question, we examined individual differences in functional connectivity (FC) from the deprived auditory cortex. Our findings demonstrate remarkable differentiation between individuals, stemming from the absence of shared auditory experience: FC variability was heightened among deaf individuals, compared with the more consistent FC observed in the hearing group.
Front Psychol
February 2025
School of Music Studies, Aristotle University of Thessaloniki, Thessaloniki, Greece.
Cross-modal correspondences between audition and olfaction have received relatively little attention compared to other modality pairs. This study expands on previous work on timbre-aroma correspondences by examining the semantic mediation hypothesis, according to which cross-modal correspondences may be partly explained by shared semantic qualities. In a behavioral experiment, 26 musically trained participants rated 26 complex synthetic tones and 12 aromatic stimuli, presented in two separate blocks, on a common set of semantic scales.
Front Neurosci
February 2025
Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany.
Audiovisual cross-modal correspondences (CMCs) refer to the brain's inherent ability to subconsciously connect auditory and visual information. These correspondences reveal essential aspects of multisensory perception and influence behavioral performance, improving reaction times and accuracy. However, the impact of different types of CMCs, whether arising from statistical co-occurrences or shaped by semantic associations, on information processing and decision-making remains underexplored.
IEEE Trans Pattern Anal Mach Intell
April 2025
Eur J Neurosci
February 2025
Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany.
Combining multisensory cues is fundamental for perception and action and is reflected in two frequently studied phenomena: multisensory integration and sensory recalibration. In the context of audio-visual spatial signals, these phenomena are exemplified by the ventriloquism effect and its aftereffect. The ventriloquism effect occurs when the perceived location of a sound is biased by a concurrent visual stimulus, while the aftereffect manifests as a recalibration of perceived sound location after exposure to spatially discrepant stimuli.