Text classification is one of the fundamental tasks in natural language processing, requiring an agent to determine the most appropriate category for input sentences. Recently, deep neural networks, especially pretrained language models (PLMs), have achieved impressive performance in this area. These methods usually concentrate on input sentences and the generation of corresponding semantic embeddings. However, for another essential component, the labels, most existing works either treat them as meaningless one-hot vectors or learn label representations with vanilla embedding methods alongside model training, underestimating the semantic information and guidance that labels reveal. To alleviate this problem and better exploit label information, in this article we employ self-supervised learning (SSL) in the model learning process and design a novel self-supervised relation-of-relation (R²) classification task that uses labels from a one-hot perspective. We then propose a novel relation-of-relation learning network (R²-Net) for text classification, in which text classification and R² classification are treated as joint optimization targets, and a triplet loss is employed to enhance the analysis of differences and connections among labels. Moreover, since one-hot usage still falls short of exploiting label information, we incorporate external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning and extend R²-Net to a novel description-enhanced label embedding network (DELE) from a label embedding perspective. One step further, since these fine-grained descriptions may introduce unexpected noise, we develop a mutual interaction module that, based on contrastive learning (CL), simultaneously selects appropriate parts from input sentences and labels for noise mitigation. Extensive experiments on different text classification tasks reveal that R²-Net effectively improves classification performance and that DELE makes better use of label information to improve it further. As a byproduct, we have released the code to facilitate further research.
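The abstract does not include implementation details, but the joint objective it describes (supervised text classification, the self-supervised R² task over pairs of sentence pairs, and a triplet loss) can be sketched as follows. This is a minimal PyTorch-style sketch under stated assumptions: the class name, the [u, v, |u−v|, u·v] pair encoding, the batch-construction scheme, and the unweighted sum of losses are illustrative choices, not the authors' released implementation.

```python
# Minimal sketch (assumptions, not the authors' released code): jointly optimize
# a text classification loss, a self-supervised relation-of-relation (R^2) loss
# over pairs of sentence pairs, and a triplet loss over encoded sentences.
import torch
import torch.nn as nn
import torch.nn.functional as F

class R2NetSketch(nn.Module):
    def __init__(self, encoder, hidden_dim, num_classes):
        super().__init__()
        self.encoder = encoder                       # e.g., a PLM mapping sentences to vectors
        self.cls_head = nn.Linear(hidden_dim, num_classes)
        pair_enc_dim = 4 * hidden_dim                # [u, v, |u-v|, u*v]
        # R^2 head: do two sentence pairs exhibit the same label relation?
        self.r2_head = nn.Linear(2 * pair_enc_dim, 2)

    def encode_pair(self, u, v):
        return torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)

    def forward(self, xa, xb, xc, xd):
        ha, hb, hc, hd = (self.encoder(x) for x in (xa, xb, xc, xd))
        logits = self.cls_head(ha)                   # ordinary classification on batch a
        rel_ab = self.encode_pair(ha, hb)            # relation of pair (a, b)
        rel_cd = self.encode_pair(hc, hd)            # relation of pair (c, d)
        r2_logits = self.r2_head(torch.cat([rel_ab, rel_cd], dim=-1))
        return logits, r2_logits, (ha, hb, hc)

def joint_loss(model, batches, labels, margin=1.0):
    """batches = (xa, xb, xc, xd); labels = (ya, yb, yc, yd).
    Assumes the sampler builds batches so that xa/xb share labels while
    xc differs from xa (needed for the triplet term)."""
    ya, yb, yc, yd = labels
    logits, r2_logits, (ha, hb, hc) = model(*batches)

    ce = F.cross_entropy(logits, ya)                 # supervised classification

    # Self-supervised R^2 target: do pairs (a,b) and (c,d) agree on whether
    # their two sentences share a label?
    same_ab = (ya == yb).long()
    same_cd = (yc == yd).long()
    r2 = F.cross_entropy(r2_logits, (same_ab == same_cd).long())

    # Triplet loss: pull same-label sentences together, push others apart.
    tri = F.triplet_margin_loss(ha, hb, hc, margin=margin)

    return ce + r2 + tri
```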

Source
http://dx.doi.org/10.1109/TNNLS.2023.3282020

Publication Analysis

Top Keywords

text classification (20)
label embedding (12)
input sentences (12)
R²-Net (12)
description-enhanced label (8)
contrastive learning (8)
classification (8)
classification text (8)
label (8)
relation relation (8)

Similar Publications

Background: Late-life depression (LLD) is a heterogeneous disorder related to cognitive decline and neurodegenerative processes, raising a need for the development of novel biomarkers. We sought to provide preliminary evidence for acoustic speech signatures sensitive to LLD and their relationship to depressive dimensions.

Methods: Forty patients (24 female, aged 65-82 years) were assessed with the Geriatric Depression Scale (GDS).

To enhance patient outcomes in pediatric cancer, a better understanding of the medical and biological risk variables is required. With the growing amount of data accessible to pediatric cancer research, machine learning (ML), a form of algorithmic inference built on sophisticated statistical techniques, is increasingly applicable. In addition to highlighting developments and prospects in the field, the objective of this systematic study was to methodically describe the state of ML in pediatric oncology.

Objectives: Vascular access (VA) stenoses play a significant role in the morbidity of the haemodialysed population. Classifications for diagnosis, assessment and proposal of treatment strategies can be useful clinical and methodological tools. This review aims to present a comprehensive summary and propose further methodological approaches.

Quantum mixed-state self-attention network.

Neural Netw

January 2025

Mechanical, Electrical and Information Engineering College, Putian University, Putian, 351100, China.

Attention mechanisms have revolutionized natural language processing, and combining them with quantum computing is aimed at advancing this technology further. This paper introduces a novel Quantum Mixed-State Self-Attention Network (QMSAN) for natural language processing tasks.

This study aimed to develop an advanced ensemble approach for the automated classification of mental health disorders in social media posts. The research question was: can an ensemble of fine-tuned transformer models (XLNet, RoBERTa, and ELECTRA) with Bayesian hyperparameter optimization improve the accuracy of mental health disorder classification in social media text? Three transformer models (XLNet, RoBERTa, and ELECTRA) were fine-tuned on a dataset of social media posts labelled with 15 distinct mental health disorders.
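As a side note, the ensembling step this related study describes can be sketched with a simple soft-voting scheme over the fine-tuned classifiers. The function name, the probability-averaging choice, and the assumption that each model returns per-class logits are illustrative, not details taken from the study.

```python
# Minimal sketch (assumed): soft-voting ensemble of three fine-tuned
# transformer classifiers by averaging their class probabilities.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, batch):
    """models: fine-tuned classifiers (e.g., XLNet-, RoBERTa-, ELECTRA-based heads)
    that each map a batch of encoded posts to [batch, num_classes] logits."""
    probs = [F.softmax(m(batch), dim=-1) for m in models]
    avg = torch.stack(probs).mean(dim=0)     # soft voting: mean class probability
    return avg.argmax(dim=-1)                # predicted disorder index per post
```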
