Resting-state functional magnetic resonance imaging (rs-fMRI) faithfully reflects brain activity and thus provides a promising tool for autism spectrum disorder (ASD) classification. To date, graph convolutional networks (GCNs) have been successfully applied to rs-fMRI-based ASD classification. However, most of these methods were built on functional connectivities (FCs) that only reflect low-level correlations between brain regions, without integrating high-level discriminative knowledge or phenotypic information into classification. Moreover, they suffer from overfitting caused by insufficient training samples. To this end, we propose a novel contrastive multi-view composite GCN (CMV-CGCN) for ASD classification using both FCs and high-order functional connectivities (HOFCs). Specifically, a pair of graphs is constructed from the FC and HOFC features of the subjects, respectively, and the two graphs share phenotypic information in their edges. A novel contrastive multi-view learning method is proposed based on the consistent representation of both views. A contribution learning mechanism is further incorporated, encouraging the FC and HOFC features of different subjects to make different contributions to the contrastive multi-view learning. The proposed CMV-CGCN is evaluated on 613 subjects (286 ASD patients and 327 normal controls, NCs) from the Autism Brain Imaging Data Exchange (ABIDE), where it achieves an accuracy of 75.20% and an area under the curve (AUC) of 0.7338 for ASD classification. Experimental results show that the proposed method outperforms state-of-the-art methods on the ABIDE database.
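The abstract above does not include implementation details, so the following is only a minimal PyTorch-style sketch of the two-view idea it describes: two GCN branches over population graphs whose edges carry the shared phenotypic information, a cross-view contrastive consistency loss, and a per-subject contribution weight. Layer sizes, the temperature, the dense GCN formulation, and the contribution head are illustrative assumptions, not the authors' released CMV-CGCN code.

# Minimal sketch of a two-view contrastive GCN, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """One graph-convolution step on a dense normalized adjacency: H' = ReLU(A_hat H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return F.relu(a_hat @ self.lin(h))


class TwoViewContrastiveGCN(nn.Module):
    """Two GCN branches (FC view and HOFC view) over population graphs whose edges
    encode the phenotypic similarity shared by both views."""
    def __init__(self, fc_dim, hofc_dim, hid_dim=64, emb_dim=32, n_classes=2):
        super().__init__()
        self.fc_branch = DenseGCNLayer(fc_dim, hid_dim)
        self.hofc_branch = DenseGCNLayer(hofc_dim, hid_dim)
        self.proj = nn.Linear(hid_dim, emb_dim)                  # shared projection for the contrast
        self.cls = nn.Linear(2 * hid_dim, n_classes)             # classifier on the fused views
        self.contrib = nn.Sequential(nn.Linear(2 * hid_dim, 1),  # per-subject contribution score
                                     nn.Sigmoid())

    def forward(self, x_fc, x_hofc, a_hat):
        z_fc = self.fc_branch(x_fc, a_hat)                       # (N, hid_dim)
        z_hofc = self.hofc_branch(x_hofc, a_hat)                 # (N, hid_dim)
        fused = torch.cat([z_fc, z_hofc], dim=1)
        logits = self.cls(fused)                                 # ASD vs. NC prediction
        weights = self.contrib(fused).squeeze(-1)                # subject-wise contributions
        return logits, self.proj(z_fc), self.proj(z_hofc), weights


def contrastive_consistency_loss(p_fc, p_hofc, weights=None, tau=0.5):
    """InfoNCE-style loss pulling each subject's FC and HOFC embeddings together while
    pushing apart embeddings of different subjects; `weights` lets subjects contribute
    unequally, mimicking a contribution-learning mechanism."""
    p_fc = F.normalize(p_fc, dim=1)
    p_hofc = F.normalize(p_hofc, dim=1)
    sim = p_fc @ p_hofc.t() / tau                                # (N, N) cross-view similarities
    targets = torch.arange(sim.size(0), device=sim.device)       # matching subject = positive pair
    per_subject = F.cross_entropy(sim, targets, reduction="none")
    if weights is not None:
        per_subject = per_subject * weights
    return per_subject.mean()

In this reading, a_hat would be a normalized population-graph adjacency built from phenotypic similarity (e.g., site, sex, age) and shared by both views, and the training objective would combine the supervised cross-entropy on the logits with the weighted contrastive consistency term; these choices are assumptions made for illustration only.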

Source: http://dx.doi.org/10.1109/TBME.2022.3232104

Publication Analysis

Top Keywords (frequency): contrastive multi-view (16); ASD classification (16); multi-view composite (8); graph convolutional (8); convolutional networks (8); contribution learning (8); autism spectrum (8); spectrum disorder (8); novel contrastive (8); HOFC features (8)

Similar Publications

MFC-ACL: Multi-view fusion clustering with attentive contrastive learning.

Neural Netw

December 2024

College of Automation, Chongqing University of Posts and Telecommunications, Nan'an District, 400065, Chongqing, China. Electronic address:

Multi-view clustering can better handle high-dimensional data by combining information from multiple views, which is important in big data mining. However, existing models that simply perform feature fusion after per-view feature extraction mostly fail to capture the holistic attribute information of multi-view data because they ignore the significant disparities among views, which seriously degrades multi-view clustering performance. In this paper, inspired by the attention mechanism, an approach called Multi-View Fusion Clustering with Attentive Contrastive Learning (MFC-ACL) is proposed to tackle these issues.

A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer.

Comput Med Imaging Graph

December 2024

Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, Liaoning 110122, China. Electronic address:

Objective: This study presents a novel framework that integrates contrastive learning and knowledge distillation to improve early ovarian cancer (OC) recurrence prediction, addressing the challenges posed by limited labeled data and tumor heterogeneity.

Methods: The research utilized CT imaging data from 585 OC patients, including 142 cases with complete follow-up information and 125 cases with unknown recurrence status. To pre-train the teacher network, 318 unlabeled images were sourced from public datasets (TCGA-OV and PLAGH-202-OC).
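Since only the framework's outline is given above, the following is a hedged sketch of how contrastive pre-training of a teacher encoder on unlabeled scans might be combined with soft-label self-distillation to a student; the loss forms and hyperparameters (temperature, mixing weight) are assumptions, not taken from the paper.

# Sketch: contrastive teacher pre-training plus soft-label distillation (assumed losses).
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, tau=0.1):
    """Contrastive (NT-Xent) loss between two augmented views of the same unlabeled scans,
    usable for pre-training the teacher encoder."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


def self_distillation_loss(student_logits, teacher_logits, labels=None, t=2.0, alpha=0.5):
    """KL distillation from the frozen teacher's softened predictions, mixed with
    supervised cross-entropy only where recurrence labels are available."""
    soft = F.kl_div(F.log_softmax(student_logits / t, dim=1),
                    F.softmax(teacher_logits / t, dim=1),
                    reduction="batchmean") * (t * t)
    if labels is None:
        return soft
    return alpha * soft + (1.0 - alpha) * F.cross_entropy(student_logits, labels)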

AMCFCN: attentive multi-view contrastive fusion clustering net.

PeerJ Comput Sci

March 2024

College of Electronic and Information Engineering, Wuyi University, Jiangmen, Guangdong, China.

Advances in deep learning have propelled the evolution of multi-view clustering techniques, which strive to obtain a view-common representation from multi-view datasets. However, the contemporary multi-view clustering community confronts two prominent challenges. One is that view-specific representations offer no guarantee of limiting the noise they introduce; the other is that the fusion process compromises view-specific representations, so efficient information cannot be captured from multi-view data.

S-PLM: Structure-Aware Protein Language Model via Contrastive Learning Between Sequence and Structure.

Adv Sci (Weinh)

December 2024

Department of Electrical Engineering and Computer Science and Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, 65211, USA.

Proteins play an essential role in various biological and engineering processes. Large protein language models (PLMs) present excellent potential to reshape protein research by accelerating the determination of protein functions and the design of proteins with the desired functions. The prediction and design capacity of PLMs relies on the representation gained from the protein sequences.

Domain generalization for mammographic image analysis with contrastive learning.

Comput Biol Med

December 2024

Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China. Electronic address:

Deep learning techniques have been shown to effectively address several image analysis tasks in computer-aided diagnosis schemes for mammography. Training an efficacious deep learning model requires a large amount of data with diverse styles and qualities. The diversity of data often comes from the use of scanners from various vendors.
