In the task incremental learning problem, deep learning models suffer from catastrophic forgetting of previously seen classes/tasks as they are trained on new classes/tasks. This problem becomes even harder when some of the test classes do not belong to the training class set, i.e., the task incremental generalized zero-shot learning problem. We propose a novel approach to address the task incremental learning problem for both the non-zero-shot and zero-shot settings. Our proposed approach, called Rectification-based Knowledge Retention (RKR), applies weight rectifications and affine transformations to adapt the model to any task. During testing, our approach can use the task label information (task-aware) to quickly adapt the network to that task. We also extend our approach to make it task-agnostic so that it can work even when the task label information is not available during testing. Specifically, given a continuum of test data, our approach predicts the task and quickly adapts the network to the predicted task. We experimentally show that our proposed approach achieves state-of-the-art results on several benchmark datasets for both non-zero-shot and zero-shot task incremental learning.
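A minimal sketch of how per-task weight rectification and an affine feature transformation could be attached to a shared layer is shown below. The module name, the use of full rectification matrices rather than compact generators, and the scale/shift parameterization are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (not the authors' implementation): a shared linear layer with
# task-specific weight rectifications and affine (scale/shift) transformations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectifiedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_tasks):
        super().__init__()
        # Shared base layer, learned on the first task and then kept fixed.
        self.base = nn.Linear(in_features, out_features)
        # Per-task weight rectifications (full matrices here for clarity;
        # compact low-rank factors would keep the per-task overhead small).
        self.rectifications = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, in_features)) for _ in range(num_tasks)]
        )
        # Per-task affine transformation of the layer output.
        self.scales = nn.ParameterList(
            [nn.Parameter(torch.ones(out_features)) for _ in range(num_tasks)]
        )
        self.shifts = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features)) for _ in range(num_tasks)]
        )

    def forward(self, x, task_id):
        # Rectify the shared weights for this task, then apply the task's affine transform.
        weight = self.base.weight + self.rectifications[task_id]
        out = F.linear(x, weight, self.base.bias)
        return out * self.scales[task_id] + self.shifts[task_id]

# Task-aware inference uses the given task label; a task-agnostic variant would
# first predict task_id from a continuum of test data and then adapt the same way.
layer = RectifiedLinear(in_features=512, out_features=256, num_tasks=5)
features = layer(torch.randn(8, 512), task_id=2)
```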


Source: http://dx.doi.org/10.1109/TPAMI.2022.3225310

Publication Analysis

Top Keywords

task incremental (20), incremental learning (16), learning problem (12), task (11), rectification-based knowledge (8), knowledge retention (8), zero-shot zero-shot (8), proposed approach (8), task label (8), learning (6)

Similar Publications

Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks, as old data from previous tasks is unavailable when learning a new task. To address this, some methods propose replaying data from previous tasks during new task learning, typically using extra memory to store the replay data. However, storing old data is often impractical due to memory constraints and data privacy concerns.
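For context, the replay strategy this snippet refers to can be sketched as follows. The reservoir-style buffer and the mixing of replayed and new batches are generic illustrations, not the cited method.

```python
# Illustrative sketch (not the cited method): experience replay with a
# fixed-size memory of examples from earlier tasks.
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []   # (input, label) pairs from previous tasks
        self.seen = 0      # total number of examples offered to the buffer

    def add(self, example):
        # Reservoir sampling keeps an approximately uniform sample of all data seen so far.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, batch_size):
        return random.sample(self.memory, min(batch_size, len(self.memory)))

# When training on a new task, each new batch is mixed with a replayed batch so
# the loss also covers old tasks, e.g.:
# mixed_batch = new_batch + buffer.sample(len(new_batch))
```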


Objective: This study investigated the effect of reliability on the function allocation (FA) boundary by examining the interaction effect of degree of automation (DOA) and reliability on routine performance, failure performance, and attention allocation.

Background: According to the lumberjack effect, an increase in DOA will typically improve routine performance, while failure performance may remain undeteriorated until a specific, high DOA threshold is reached. This threshold can be regarded as the FA boundary.


Fast-chargeable lithium-ion batteries by μ-Si anode-tailored full-cell design.

Proc Natl Acad Sci U S A

January 2025

School of Chemical and Biological Engineering and Institute of Chemical Processes, Seoul National University, Seoul 08826, Republic of Korea.

Silicon (Si) anodes have long been recognized to significantly improve the energy density and fast-charging capability of lithium-ion batteries (LIBs). However, the implementation of these anodes in commercial LIB cells has progressed incrementally due to the immense volume change of Si across its full state-of-charge (SOC) range. Here, we report an anode-tailored full-cell design (ATFD), which incorporates micrometer-sized silicon (μ-Si) alone, for operation over a limited, prespecified SOC range identified as 30-70%.


Computational modeling has revealed that human research participants use both rapid working memory (WM) and incremental reinforcement learning (RL) (RL+WM) to solve a simple instrumental learning task, relying on WM when the number of stimuli is small and supplementing with RL when the number of stimuli exceeds WM capacity. Inspired by this work, we examined which learning systems and strategies are used by adolescent and adult mice when they first acquire a conditional associative learning task. In a version of the human RL+WM task translated for rodents, mice were required to associate odor stimuli (from a set of 2 or 4 odors) with a left or right port to receive reward.
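The RL+WM idea can be sketched as a mixture of a capacity-limited one-shot memory and an incremental delta-rule learner. The mixing rule, parameter names, and values below are illustrative assumptions, not the study's fitted model.

```python
# Illustrative sketch (assumptions, not the fitted model): a policy that mixes
# incremental RL values with a capacity-limited working-memory store.
import random
from collections import OrderedDict

class RLWMAgent:
    def __init__(self, actions, alpha=0.1, wm_capacity=3, wm_weight=0.8):
        self.actions = actions
        self.alpha = alpha                 # RL learning rate
        self.wm_capacity = wm_capacity     # how many stimulus-action pairs WM can hold
        self.wm_weight = wm_weight         # reliance on WM when the item is in memory
        self.q = {}                        # incremental RL values: (stimulus, action) -> value
        self.wm = OrderedDict()            # one-shot WM store: stimulus -> last rewarded action

    def choose(self, stimulus):
        # Greedy RL choice with a random tie-break.
        values = [self.q.get((stimulus, a), 0.0) for a in self.actions]
        best = max(values)
        rl_action = random.choice([a for a, v in zip(self.actions, values) if v == best])
        # If the stimulus is held in WM, rely on it with probability wm_weight.
        if stimulus in self.wm and random.random() < self.wm_weight:
            return self.wm[stimulus]
        return rl_action

    def update(self, stimulus, action, reward):
        # Incremental RL: delta-rule update toward the observed reward.
        old = self.q.get((stimulus, action), 0.0)
        self.q[(stimulus, action)] = old + self.alpha * (reward - old)
        # Working memory: store rewarded associations, evicting the oldest beyond capacity.
        if reward > 0:
            self.wm[stimulus] = action
            self.wm.move_to_end(stimulus)
            while len(self.wm) > self.wm_capacity:
                self.wm.popitem(last=False)
```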


Similarity-based context aware continual learning for spiking neural networks.

Neural Netw

December 2024

Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; Center for Long-term Artificial Intelligence, Beijing, China.

Biological brains can adaptively coordinate relevant neuronal populations based on the task context to learn continuously changing tasks in real-world environments. However, existing spiking neural network-based continual learning algorithms treat every task equally, ignoring how similarity between tasks could guide network learning, which limits how efficiently knowledge is reused. Inspired by the context-dependent plasticity mechanism of the brain, we propose a Similarity-based Context Aware Spiking Neural Network (SCA-SNN) continual learning algorithm to efficiently accomplish task incremental learning and class incremental learning.

