The multi-modal knowledge graph completion (MMKGC) task aims to automatically mine missing factual knowledge from existing multi-modal knowledge graphs (MMKGs), which is crucial for advancing cross-modal learning and reasoning. However, few methods consider the adverse effects that missing modal information has on model learning. To address these challenges, we propose a Modal Equilibrium Relational Graph framework, called MERGE. By constructing three modal-specific directed relational graph attention networks, MERGE can implicitly represent missing modal information for entities by aggregating the modal embeddings of neighboring nodes. Subsequently, a fusion approach based on low-rank tensor decomposition is adopted to align multiple modal features at both the explicit structural level and the implicit semantic level, exploiting the structural information inherent in the original knowledge graphs and thereby enhancing the interpretability of the fused features. Furthermore, we introduce a novel interpolation re-ranking strategy that adjusts the importance of modalities during inference while preserving the semantic integrity of each modality. The proposed framework is validated on four publicly available datasets, and the experimental results demonstrate the effectiveness and robustness of our method on the MMKGC task.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11644511 | PMC |
| http://dx.doi.org/10.3390/s24237605 | DOI Listing |
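The abstract above names two mechanisms without giving details: low-rank tensor-decomposition fusion of modal features and interpolation re-ranking at inference. Below is a minimal PyTorch sketch of what these could look like; the class names, dimensions, and the specific factorization (in the spirit of low-rank multimodal fusion) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: low-rank fusion of per-modality entity embeddings and
# interpolation re-ranking of per-modality triple scores. All names and shapes
# are assumptions made for illustration only.
import torch
import torch.nn as nn


class LowRankFusion(nn.Module):
    """Fuse structural, visual, and textual entity embeddings via rank-R factors."""

    def __init__(self, dims, out_dim, rank=4):
        super().__init__()
        # One rank-R factor tensor per modality; the +1 accounts for a constant
        # bias feature appended to each modality vector before projection.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.05) for d in dims]
        )
        self.rank_weights = nn.Parameter(torch.ones(rank) / rank)

    def forward(self, modal_embs):
        # modal_embs: list of (batch, d_m) tensors, one per modality.
        fused = None
        for emb, factor in zip(modal_embs, self.factors):
            ones = torch.ones(emb.size(0), 1, device=emb.device)
            x = torch.cat([emb, ones], dim=-1)               # (batch, d_m + 1)
            proj = torch.einsum("bd,rdo->bro", x, factor)    # (batch, rank, out)
            fused = proj if fused is None else fused * proj  # implicit tensor product
        # Collapse the rank dimension with learned weights.
        return torch.einsum("bro,r->bo", fused, self.rank_weights)


def interpolated_score(scores_per_modality, alphas):
    """Interpolation re-ranking: blend per-modality candidate scores at inference."""
    total = torch.zeros_like(next(iter(scores_per_modality.values())))
    for modality, score in scores_per_modality.items():
        total = total + alphas[modality] * score
    return total


# Toy usage with made-up dimensions for structure / image / text embeddings.
fusion = LowRankFusion(dims=[200, 512, 768], out_dim=200, rank=4)
fused_entity = fusion([torch.randn(32, 200), torch.randn(32, 512), torch.randn(32, 768)])
candidate_scores = {"structure": torch.randn(32, 1000),
                    "image": torch.randn(32, 1000),
                    "text": torch.randn(32, 1000)}
final_ranking = interpolated_score(candidate_scores,
                                   {"structure": 0.6, "image": 0.2, "text": 0.2})
```

The point of the rank-R factors is tractability: instead of materializing a full three-way interaction tensor across the modality dimensions, each modality is projected through R small factors and the projections are multiplied element-wise before being collapsed.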
Alzheimers Dement
December 2024
Department of Neurobiology, Care Sciences and Society, Center for Alzheimer Research, Karolinska Institutet, Stockholm, Sweden.
Background: Detecting early stages of Alzheimer's disease (AD) remains a crucial yet complex challenge. While interest has recently surged in detecting biomarkers linked to the preclinical phase of the disease, a comprehensive understanding of the concomitant peripheral biological pathways before potential disease onset is necessary. We aim to explore the associations of the 18F-MK6240 tau PET tracer with plasma inflammatory markers, other AT(X)N biomarkers, and episodic memory.
Alzheimers Dement
December 2024
Penn State University College of Medicine, Hershey, PA, USA.
Background: AD prevention and early intervention require tools for evaluating people during aging for the diagnosis and prognosis of conversion to AD. Since AD is a complicated continuum of neurodegenerative processes, developing such tools has been difficult because they require longitudinal and multimodal data, which are often complicated and incomplete. To address this challenge, we are developing the AI4AD framework using ADNI data.
Alzheimers Dement
December 2024
University of Alabama, Birmingham, AL, USA.
Background: Black/African Americans in the Deep South have been subjected to social segregation, discrimination, and other forms of systemic injustice that continue to negatively impact this population's social determinants of health (SDoH). Healthy People 2030 has outlined a framework describing how adverse SDoH are associated with health inequities, including higher rates of Alzheimer's disease and related dementias (ADRD). Historically, it has been challenging to recruit citizens from this region to participate in brain aging-related research studies.
Neural Netw
December 2024
School of Computer and Electronic Information, Guangxi University, University Road, Nanning, 530004, Guangxi, China. Electronic address:
Vision-language navigation (VLN) is a challenging task that requires agents to capture the correlations between modalities from redundant information according to instructions, and then make sequential decisions over visual scenes and text instructions in the action space. Recent research has focused on extracting visual features and enhancing textual knowledge, ignoring the potential bias in multi-modal data and the problem of spurious correlations between vision and text. Therefore, this paper studies the relational structure of multi-modal data from the perspective of causality and weakens the spurious correlations between modalities through cross-modal causality reasoning.
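The abstract says spurious vision-text correlations are weakened through cross-modal causality reasoning but does not spell out the mechanism. The sketch below shows one common way this idea is realised in the vision-language literature, a backdoor-adjustment-style attention over a learned confounder dictionary; the class name, shapes, and the choice of this particular adjustment are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of backdoor-adjustment-style deconfounding for text features.
import torch
import torch.nn as nn


class BackdoorAdjustedAttention(nn.Module):
    """Debias text features by attending to a confounder dictionary instead of the raw image."""

    def __init__(self, dim, num_confounders=64):
        super().__init__()
        # Learned dictionary approximating the confounder prior P(z), e.g. cluster
        # centres of visual features collected over the training set.
        self.confounders = nn.Parameter(torch.randn(num_confounders, dim) * 0.02)
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, text_feat):
        # text_feat: (batch, dim). Attention over the dictionary approximates the
        # backdoor expectation E_z[f(text, z)], so decisions rely less on
        # co-occurrence quirks of the current image alone.
        q = self.query(text_feat)                    # (batch, dim)
        k = self.key(self.confounders)               # (num_confounders, dim)
        v = self.value(self.confounders)             # (num_confounders, dim)
        attn = torch.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)
        return text_feat + attn @ v                  # residual, debiased representation
```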