Entity alignment is a crucial task for knowledge graphs, aiming to match corresponding entities across different knowledge graphs. Because pre-aligned entities are scarce in real-world scenarios, research on unsupervised entity alignment has gained popularity. However, current unsupervised entity alignment methods lack informative entity guidance, which hinders their ability to correctly align hard entities with similar names and structures. To address these problems, we present an unsupervised multi-view contrastive learning framework with an attention-based reranking strategy for entity alignment, named AR-Align. In AR-Align, two kinds of data augmentation are employed to provide complementary views of the neighborhood and attribute structures, respectively. Next, a multi-view contrastive learning method is introduced to reduce the semantic gap between different views of the augmented entities. Moreover, an attention-based reranking strategy is proposed to rerank hard entities by computing a weighted sum of their embedding similarities on different structures. Experimental results indicate that AR-Align outperforms most state-of-the-art methods, both supervised and unsupervised, on three benchmark datasets.
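The abstract gives no implementation details, but its two core operations can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' code: the InfoNCE-style objective, the softmax attention weights, and all function names are assumptions introduced here.

```python
import torch
import torch.nn.functional as F

def multi_view_contrastive_loss(z_view1, z_view2, temperature=0.5):
    # Normalize both augmented views so the dot product is a cosine similarity.
    z1 = F.normalize(z_view1, dim=1)              # (n, d)
    z2 = F.normalize(z_view2, dim=1)              # (n, d)
    logits = z1 @ z2.t() / temperature            # (n, n) pairwise similarities
    labels = torch.arange(z1.size(0))             # the i-th entity matches its own other view
    # Symmetric InfoNCE: each view must identify its counterpart among all entities
    # (an assumption about the exact contrastive objective used in AR-Align).
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

def attention_rerank(sim_by_structure, attn_scores):
    # sim_by_structure: list of (n_src, n_tgt) similarity matrices, one per structure
    # (e.g., neighborhood view and attribute view); attn_scores: one raw score per structure.
    weights = torch.softmax(torch.as_tensor(attn_scores, dtype=torch.float), dim=0)
    combined = sum(w * s for w, s in zip(weights, sim_by_structure))
    # Candidate target entities reordered by the weighted sum of similarities.
    return combined.argsort(dim=1, descending=True)
```

In AR-Align the attention scores over structures would be derived from the entity representations themselves; here they are treated as given inputs to keep the sketch short.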
DOI: http://dx.doi.org/10.1016/j.neunet.2024.106583
Neural Netw, May 2025. Big Data Institute, Central South University, Changsha, Hunan 410083, China.
Extrapolation reasoning in temporal knowledge graphs (TKGs) aims to predict future facts based on historical data and finds extensive application in diverse real-world scenarios. Existing TKG reasoning methods primarily focus on capturing fact evolution to improve entity temporal representations, often overlooking alignment with the query semantics. More importantly, these methods fail to generate explicit inference paths, resulting in a lack of explainability.
Neural Netw, May 2025. School of Big Data, Yunnan Agricultural University, Yunnan, 650201, China.
Current few-shot relational triple extraction (FS-RTE) techniques, which rely on prototype networks, have made significant progress. Nevertheless, the scarcity of data in the support set gives rise to both intra-class and inter-class gaps in FS-RTE. With only a restricted support set, it is difficult to capture the varied features of target instances in the query set, which produces the intra-class gaps.
J Environ Manage, March 2025. University of Tehran, Tehran, Iran.
This study examines key technologies in the circular economy through patent mining and expert evaluations. We identified eleven distinct technology clusters, including Smart Fluid Management Systems, Circular Chemical Processing, and Structural Design for Circularity. Using the S-curve model, we analyzed the maturity stages of these technologies, revealing a mix of mature and emerging technologies.
IEEE Trans Neural Netw Learn Syst, February 2025.
Multimodal named entity recognition (MNER) is an emerging field that aims to automatically detect named entities and classify their categories using the input text and auxiliary resources such as images. While previous studies have leveraged object detectors to preprocess images and fuse textual semantics with the corresponding image features, these methods often overlook finer-grained information within each modality and may exacerbate error propagation due to pre-detection. To address these issues, we propose a finer-grained rank-based contrastive learning (FRCL) framework for MNER.
IEEE Trans Neural Netw Learn Syst, November 2024.
Network alignment is a fundamental problem in various domains, since it establishes bridges between different networks for the same entities (i.e., anchor nodes).