AI Article Synopsis

  • The text discusses a new approach to few-shot class-incremental learning (FSCIL): the challenge of recognizing new categories from only a few examples while retaining knowledge of previously learned categories.
  • The proposed method, called Limit, uses meta-learning to create synthetic tasks that resemble real incremental tasks, helping the model build a more generalizable feature space for unseen classes.
  • Limit also includes a calibration module that aligns the classifiers for old and new classes, ensuring consistent performance, and it has demonstrated superior results on several benchmark datasets.

Article Abstract

New classes arise frequently in our ever-changing world, e.g., emerging topics in social media and new types of products in e-commerce. A model should recognize new classes while maintaining discriminability over old classes. Under severe circumstances, only limited novel instances are available to incrementally update the model. The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL). In this work, we propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (Limit), which synthesizes fake FSCIL tasks from the base dataset. The data format of the fake tasks is consistent with the 'real' incremental tasks, and we can build a generalizable feature space for unseen tasks through meta-learning. Besides, Limit also constructs a transformer-based calibration module, which calibrates the old class classifiers and new class prototypes into the same scale and fills in the semantic gap. The calibration module also adaptively contextualizes the instance-specific embedding with a set-to-set function. Limit efficiently adapts to new classes while resisting forgetting of old classes. Experiments on three benchmark datasets (CIFAR100, miniImageNet, and CUB200) and a large-scale dataset, ImageNet ILSVRC2012, validate that Limit achieves state-of-the-art performance.
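The abstract's central idea is to synthesize fake multi-phase incremental tasks from the base dataset so that training episodes mirror the format of real FSCIL sessions. The sketch below is a minimal, hypothetical illustration of that sampling step (function and placeholder names are ours, not from the paper): each phase introduces a disjoint set of "new" classes with only a few support examples, mimicking an incremental session.

```python
import random

def sample_fake_incremental_task(base_labels, n_phases=3,
                                 ways_per_phase=5, shots=5, seed=None):
    """Sample a synthetic multi-phase incremental task from base classes.

    Each phase introduces `ways_per_phase` previously unused classes,
    each with `shots` support items, mimicking a real FSCIL session.
    Returns a list of phases, each mapping class -> list of item ids.
    """
    rng = random.Random(seed)
    # Draw disjoint classes for all phases up front.
    classes = rng.sample(sorted(base_labels), n_phases * ways_per_phase)
    phases = []
    for p in range(n_phases):
        phase_classes = classes[p * ways_per_phase:(p + 1) * ways_per_phase]
        # Placeholder item ids stand in for sampled training images.
        support = {c: [f"img_{c}_{i}" for i in range(shots)]
                   for c in phase_classes}
        phases.append(support)
    return phases

# Example: a 3-phase, 5-way, 5-shot fake task drawn from 60 base classes.
task = sample_fake_incremental_task(range(60), n_phases=3,
                                    ways_per_phase=5, shots=5, seed=0)
```

During meta-training, many such episodes would be drawn so the model repeatedly practices adapting to new classes without forgetting earlier phases.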

Source
http://dx.doi.org/10.1109/TPAMI.2022.3200865

Publication Analysis

Top Keywords

few-shot class-incremental (8)
class-incremental learning (8)
forgetting classes (8)
incremental tasks (8)
calibration module (8)
classes (7)
tasks (6)
learning sampling (4)
sampling multi-phase (4)
multi-phase tasks (4)
