Composite attention mechanism network for deep contrastive multi-view clustering.

Neural Networks

School of Computer Science, Guangdong University of Science and Technology, Dongguan, 523083, China.

Published: August 2024

Contrastive learning-based deep multi-view clustering methods have become a mainstream solution for unlabeled multi-view data. These methods usually combine an autoencoder with contrastive learning and/or MLP projectors to generate more representative latent representations for the final clustering stage. However, existing deep contrastive multi-view clustering methods overlook two key points: (i) latent representations projected through one or more MLP layers, or new representations obtained directly from an autoencoder, fail to mine the inherent relationships within each view or across views; (ii) most existing frameworks employ only a single or dual contrastive learning module, i.e., view- and/or category-oriented, which can leave latent representations and clustering assignments without any communication. This paper proposes a new composite attention framework for contrastive multi-view clustering to address these two challenges. Our method learns latent representations with a composite attention structure, i.e., a Hierarchical Transformer for each view and Shared Attention across all views, rather than a simple MLP. As a result, the learned representations simultaneously preserve important features within each view and balance the contributions across views. In addition, we add a new communication loss to our dual contrastive framework: common semantics are brought into the clustering assignments by pushing them closer to the fused latent representations. Our method therefore yields higher-quality clustering assignments for segmenting unlabeled multi-view data. Extensive experiments on several real datasets demonstrate that the proposed method outperforms many state-of-the-art clustering algorithms, with a particularly significant average accuracy improvement of 10% on the Caltech dataset and its subsets.
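The communication-loss idea described above can be illustrated with a minimal numpy sketch. This is our own simplified reading, not the paper's implementation: views are fused by averaging, soft cluster assignments are derived from distances to hypothetical centroids, and each view's assignments are pulled toward the fused assignments via cross-entropy. The functions `soft_assign` and `communication_loss` and all parameters (`tau`, the centroid matrix) are illustrative assumptions.

```python
import numpy as np

def soft_assign(z, centroids, tau=1.0):
    """Soft cluster assignments via a softmax over negative squared
    distances to the centroids (an illustrative stand-in for the
    paper's clustering head). Rows sum to 1."""
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, k)
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def communication_loss(view_latents, centroids, eps=1e-9):
    """Sketch of a 'communication' term: fuse the per-view latent
    representations (here, by averaging), compute assignments from the
    fused representation, and penalize each view's assignments for
    straying from the fused ones (cross-entropy)."""
    fused = np.mean(view_latents, axis=0)           # (n, d)
    q = soft_assign(fused, centroids)               # fused assignments
    losses = []
    for z in view_latents:                          # one (n, d) block per view
        p = soft_assign(z, centroids)
        losses.append(-(q * np.log(p + eps)).sum(axis=1).mean())
    return float(np.mean(losses))
```

In a training loop this term would be added to the view- and category-oriented contrastive losses, so that gradient updates push clustering assignments and fused latent representations toward agreement, which is the communication effect the abstract describes.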

DOI: http://dx.doi.org/10.1016/j.neunet.2024.106361

Publication Analysis

Top Keywords (frequency):
latent representations: 20
multi-view clustering: 16
clustering assignments: 16
composite attention: 12
contrastive multi-view: 12
clustering: 10
deep contrastive: 8
unlabeled multi-view: 8
multi-view data: 8
representations: 7
