In the few-shot class-incremental learning (FSCIL) setting, new classes with only a few training examples become available incrementally, and deep learning models suffer from catastrophic forgetting of previously seen classes when trained on the new ones. Data augmentation is commonly used to enlarge the training data and improve model performance. In this work, we demonstrate that differently augmented views of the same image do not necessarily activate the same set of neurons in the model. Consequently, the information a model acquires about a class when trained with data augmentation is not necessarily stored in a single set of model weights. During incremental training, even if the weights that store previously seen class information for one view get overwritten, the information for other views may remain intact elsewhere in the network. The impact of catastrophic forgetting on the model's predictions therefore differs across the data augmentations used during training. Based on this observation, we present an Augmentation-based Prediction Rectification (APR) approach that reduces the impact of catastrophic forgetting in the FSCIL setting. APR can also be combined with other FSCIL approaches to significantly improve their performance. We further propose a novel feature synthesis module (FSM) that synthesizes features relevant to previously seen classes without requiring any training data from those classes; FSM outperforms other generative approaches in this setting. Experiments show that our approach outperforms existing methods on benchmark datasets.
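
To make the intuition concrete, here is a minimal sketch of augmentation-based rectification: class probabilities are averaged over several augmented views at test time, so views whose class information survived incremental training can outvote views whose supporting weights were overwritten. This is an illustrative stand-in assuming a standard PyTorch classifier, not the paper's exact APR rule; the view list, image size, and model are hypothetical.

```python
# Illustrative sketch, NOT the paper's exact APR rule: average class
# probabilities over differently augmented views of a test image, since
# forgetting may affect each view's supporting weights differently.
import torch
import torchvision.transforms as T

views = [
    T.Compose([T.RandomHorizontalFlip(p=1.0), T.ToTensor()]),
    T.Compose([T.RandomResizedCrop(32, scale=(0.8, 1.0)), T.ToTensor()]),
    T.Compose([T.ColorJitter(brightness=0.4), T.ToTensor()]),
    T.Compose([T.ToTensor()]),  # the unaugmented view
]

@torch.no_grad()
def rectified_predict(model, pil_image):
    """Average softmax outputs over augmented views, then take argmax."""
    model.eval()
    probs = []
    for aug in views:
        x = aug(pil_image).unsqueeze(0)          # (1, C, H, W)
        probs.append(model(x).softmax(dim=-1))   # (1, num_classes)
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)
```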

Source
http://dx.doi.org/10.1016/j.neunet.2023.06.043

Publication Analysis

Top Keywords

catastrophic forgetting (12); fscil setting (8); previous classes (8); data augmentation (8); training data (8); data augmentations (8); set neurons (8); neurons model (8); model weights (8); impact catastrophic (8)

Similar Publications

Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks, because old data from previous tasks is unavailable while learning a new task. To address this, some methods replay data from previous tasks during new-task learning, typically using extra memory to store the replay data. However, storing such data is often impractical due to memory constraints and data privacy issues.
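
For concreteness, here is a minimal sketch of the episodic replay idea referenced above, using reservoir sampling to respect a fixed memory budget; the class and capacity are illustrative, not taken from any particular method.

```python
import random

class ReplayBuffer:
    """Fixed-size episodic memory filled by reservoir sampling, so every
    example seen so far has an equal chance of being retained."""
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.buffer = []      # list of (example, label) pairs
        self.seen = 0         # total examples offered so far

    def add(self, example, label):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((example, label))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (example, label)

    def sample(self, batch_size):
        """Draw a mini-batch of stored examples to interleave with new-task data."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```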

Background: The Automatic Essay Score (AES) prediction system is essential in education applications. An AES system uses various textual and grammatical features to estimate the exact score for an essay. The derived features are processed by various linear regression models and classifiers, which learn patterns from scored essays to improve the overall score prediction.
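
A toy pipeline in this spirit (hypothetical, not the paper's system): TF-IDF textual features fed to a linear regression model via scikit-learn. The essays and scores below are placeholders.

```python
# Hypothetical mini-pipeline: TF-IDF textual features + linear regression,
# the kind of feature + linear-model setup the snippet describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = ["The essay argues that renewable energy ...",
          "Another essay discussing the same prompt ..."]
scores = [4.0, 2.5]  # human-assigned reference scores

aes_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
aes_model.fit(essays, scores)
print(aes_model.predict(["A new, unseen essay ..."]))
```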

Label-Guided relation prototype generation for Continual Relation Extraction.

PeerJ Comput Sci

October 2024

Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia.

Continual relation extraction (CRE) aims to extract relations as new data arrives continuously and iteratively. To address catastrophic forgetting, existing research has explored memory replay methods that either store representative previously learned instances or embed each observed relation as a prototype by averaging the hidden representations of its samples, replaying these during subsequent training. However, this prototype generation method overlooks the rich semantic information within the label namespace and is also constrained by memory size, so the resulting relation prototypes describe relation semantics inadequately.
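
A minimal sketch of the averaging-based prototype construction the abstract critiques, assuming a placeholder encoder that maps a batch of samples to hidden representations; names and shapes are illustrative, not the paper's actual model.

```python
# Sketch of prototype generation by averaging: each relation's prototype
# is the mean of its samples' hidden representations. Encoder and inputs
# are placeholders for illustration only.
import torch

def build_prototypes(encoder, samples_by_relation):
    """samples_by_relation: dict mapping relation label -> batch of
    encoded inputs; returns dict mapping label -> prototype vector."""
    prototypes = {}
    with torch.no_grad():
        for relation, batch in samples_by_relation.items():
            hidden = encoder(batch)              # (num_samples, dim)
            prototypes[relation] = hidden.mean(dim=0)
    return prototypes
```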

The limitations of deep neural networks in continual learning stem from oversimplifying the complexities of biological neural circuits, often neglecting the dynamic balance between memory stability and learning plasticity. This study introduces artificial synaptic devices enhanced with graphene quantum dots (GQDs) that exhibit metaplasticity, a higher-order form of synaptic plasticity that dynamically regulates memory and learning processes, similar to those observed in biological systems. The GQD-assisted devices use interface-mediated modifications of asymmetric conductive pathways to replicate classical synaptic plasticity mechanisms.
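
As a software analogy only (not a model of the GQD devices), the toy rule below captures the gist of metaplasticity: each synapse's effective learning rate decays as it accumulates consistent updates, trading plasticity for stability. All names and constants are hypothetical.

```python
import numpy as np

# Toy metaplasticity rule (software analogy, NOT a device model): a hidden
# per-synapse state grows with accumulated updates and suppresses further
# change, so well-used synapses become stable while fresh ones stay plastic.
class MetaplasticSynapses:
    def __init__(self, n, lr=0.1, hardening=1.0):
        self.w = np.zeros(n)      # synaptic weights
        self.m = np.zeros(n)      # hidden metaplastic state
        self.lr, self.hardening = lr, hardening

    def update(self, grad):
        plasticity = 1.0 / (1.0 + self.hardening * self.m)  # harder -> slower
        self.w -= self.lr * plasticity * grad
        self.m += np.abs(grad)    # repeated updates harden the synapse
```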

Improving forward compatibility in class incremental learning by increasing representation rank and feature richness.

Neural Netw

December 2024

Interdisciplinary Program in Artificial Intelligence, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea; Department of Intelligence and Information, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea; Research Institute for Convergence Science, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.

Class Incremental Learning (CIL) constitutes a pivotal subfield within continual learning, aimed at enabling models to progressively learn new classification tasks while retaining knowledge obtained from prior tasks. Although previous studies have predominantly focused on backward-compatible approaches to mitigate catastrophic forgetting, recent investigations have introduced forward-compatible methods that enhance performance on novel tasks and complement existing backward-compatible methods. In this study, we introduce the effective-Rank based Feature Richness enhancement (RFR) method, designed to improve forward compatibility.
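
For reference, effective rank is commonly computed as the exponential of the Shannon entropy of the normalized singular values of a feature matrix (Roy and Vetterli, 2007); higher values indicate richer, less collapsed representations. The sketch below computes this quantity and is illustrative only, not the RFR training objective itself.

```python
import torch

def effective_rank(features, eps=1e-12):
    """Effective rank of a (num_samples, dim) feature matrix: exp of the
    Shannon entropy of the normalized singular value distribution."""
    s = torch.linalg.svdvals(features)     # singular values, descending
    p = s / (s.sum() + eps)                # normalize to a distribution
    entropy = -(p * torch.log(p + eps)).sum()
    return torch.exp(entropy)
```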
