Due to inter-subject variability in electroencephalogram (EEG) signals, the generalization ability of many existing brain-computer interface (BCI) models is significantly limited. Although transfer learning (TL) offers a temporary solution, in scenarios requiring sustained knowledge transfer the performance of TL-based models gradually declines as the number of transfers increases, a phenomenon known as catastrophic forgetting. To address this issue, we introduce a novel domain-incremental learning framework for continual motor imagery (MI) EEG classification. Specifically, to learn and retain features common across subjects, we separate latent representations into subject-invariant and subject-specific features through adversarial training, and we propose an extensible architecture to preserve features that are easily forgotten. Additionally, we incorporate a memory replay mechanism to reinforce previously acquired knowledge. Through extensive experiments, we demonstrate our framework's effectiveness in mitigating forgetting in the continual MI-EEG classification task.
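The adversarial separation described in this abstract is commonly implemented with a gradient reversal layer placed between the feature encoder and a subject discriminator: the forward pass is the identity, while the backward pass negates (and optionally scales) the gradient, pushing the encoder toward features the discriminator cannot use to identify the subject. A minimal sketch of that operation, with the function names and the scaling parameter `lam` chosen here for illustration rather than taken from the paper:

```python
import numpy as np

def grl_forward(x):
    # Forward pass is the identity: features flow unchanged
    # into the subject discriminator.
    return x

def grl_backward(grad, lam=1.0):
    # Backward pass negates and scales the incoming gradient, so the
    # encoder is trained to *confuse* the subject discriminator and
    # thus learn subject-invariant features.
    return -lam * grad

g = np.array([0.5, -2.0])
print(grl_backward(g))          # reversed gradient: [-0.5, 2.0]
print(grl_backward(g, lam=0.5)) # reversed and scaled: [-0.25, 1.0]
```

In a full model, the subject-specific branch would bypass this layer, so only the shared (invariant) representation receives the reversed discriminator gradient.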
DOI: http://dx.doi.org/10.1109/EMBC53108.2024.10781886
Annu Int Conf IEEE Eng Med Biol Soc
July 2024
Chest X-ray (CXR) images have been widely adopted in clinical care and pathological diagnosis in recent years. Some advanced methods for the CXR classification task achieve impressive performance by training the model statically. However, in a real clinical environment the model needs to learn continually, which can be viewed as a domain incremental learning (DIL) problem.
Interdiscip Sci
February 2025
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
Continual learning is the ability of a model to learn over time without forgetting previous knowledge. The ability to adapt to new data is therefore paramount in dynamic fields such as disease outbreak prediction, yet deep neural networks are prone to catastrophic forgetting.
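A standard countermeasure against the catastrophic forgetting mentioned above is rehearsal: keep a small fixed-size memory of past examples and mix them into each new training batch. A minimal sketch of such a buffer using reservoir sampling (class name and interface are illustrative, not from any of the listed papers):

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling
    so that every example seen so far is retained with equal probability."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / n_seen,
            # evicting a uniformly chosen stored example.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw a rehearsal mini-batch to mix with new-domain data.
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer(capacity=10)
for i in range(100):
    buf.add(i)
print(len(buf.data))   # 10
print(len(buf.sample(4)))  # 4
```

During incremental training, each gradient step would combine a batch of current-domain data with a batch drawn from `sample`, reinforcing previously acquired knowledge.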
J Neural Eng
March 2025
Key Laboratory of the Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China.
Epilepsy is a neurological disorder that affects millions of patients worldwide. Electroencephalogram-based seizure detection plays a crucial role in its timely diagnosis and effective monitoring.
Comput Biol Med
November 2024
Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China.
In the dynamic realm of practical clinical scenarios, Continual Learning (CL) has gained increasing interest in medical image analysis due to its potential to address major challenges associated with data privacy, model adaptability, memory efficiency, prediction robustness, and detection accuracy. In general, the primary challenge in adapting and advancing CL remains catastrophic forgetting. Beyond this challenge, recent years have witnessed a growing body of work that expands our comprehension and application of continual learning in the medical domain, highlighting its practical significance and intricacy.