Adversarial artifact detection in EEG-based brain-computer interfaces.

J Neural Eng

Belt and Road Joint Laboratory on Measurement and Control Technology, Huazhong University of Science and Technology, Wuhan, People's Republic of China.

Published: October 2024

AI Article Synopsis

  • Machine learning has significantly advanced EEG-based brain-computer interfaces (BCIs), yet these systems are susceptible to adversarial attacks that can lead to misclassification of signals.
  • This paper introduces the first exploration of adversarial detection specifically for EEG-based BCIs, applying various detection techniques from computer vision and proposing new methods based on Mahalanobis and cosine distances.
  • The study evaluated eight detection approaches across three EEG datasets, three neural networks, and four types of adversarial attacks, achieving high accuracy in detecting white-box attacks, which could help improve the overall security and robustness of BCI models.
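As an illustration of how a white-box attack crafts the small perturbations described above, here is a minimal FGSM-style sketch against a toy linear decoder. Everything in it (the decoder, weights, and numbers) is hypothetical; the paper's attacks target real EEG neural networks, but the mechanism — stepping against the sign of the input gradient — is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear decoder standing in for an EEG classifier (hypothetical).
n_features = 64                      # e.g. flattened channels x time samples
w = rng.normal(size=n_features)      # decoder weights

def predict(x):
    """Binary decision of the linear decoder."""
    return int(x @ w > 0)

# A clean sample constructed to have a comfortable positive margin (class 1).
x = 0.2 * w

# FGSM: for a linear model, the input gradient of the score is w itself,
# so the attack steps against sign(w). Choose eps just large enough to
# overcome this sample's margin, guaranteeing a label flip.
margin = x @ w
eps = 1.1 * margin / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))    # → 1 0 (clean vs. adversarial label)
```

Note how small each per-feature change is (bounded by `eps`) relative to the signal, which is exactly why such examples are hard to spot by inspection and why dedicated detectors are needed.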

Article Abstract

Objective. Machine learning has achieved significant success in electroencephalogram (EEG) based brain-computer interfaces (BCIs), with most existing research focusing on improving the decoding accuracy. However, recent studies have shown that EEG-based BCIs are vulnerable to adversarial attacks, where small perturbations added to the input can cause misclassification. Detecting adversarial examples is crucial for both understanding this phenomenon and developing effective defense strategies. Approach. This paper, for the first time, explores adversarial detection in EEG-based BCIs. We extend several popular adversarial detection approaches from computer vision to BCIs. Two new Mahalanobis distance based adversarial detection approaches, and three cosine distance based adversarial detection approaches, are also proposed, which showed promising performance in detecting three kinds of white-box attacks. Main results. We evaluated the performance of eight adversarial detection approaches on three EEG datasets, three neural networks, and four types of adversarial attacks. Our approach achieved an area under the curve score of up to 99.99% in detecting white-box attacks. Additionally, we assessed the transferability of different adversarial detectors to unknown attacks. Significance. Through extensive experiments, we found that white-box attacks may be easily detected, and differences exist in the distributions of different types of adversarial examples. Our work should facilitate understanding the vulnerability of existing BCI models and developing more secure BCIs in the future.
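The Mahalanobis distance based detectors mentioned in the abstract rest on a simple idea: clean inputs produce features close to the training distribution, while adversarial inputs drift away from it. The sketch below illustrates this with synthetic stand-in features (the actual approaches fit Gaussians to a trained network's hidden-layer features, typically per class; all data here is fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for decoder features: clean EEG trials cluster
# near the training distribution; adversarial trials drift away from it.
clean_feats = rng.normal(loc=1.0, scale=1.0, size=(500, 8))
adv_feats = rng.normal(loc=4.0, scale=1.0, size=(50, 8))

# Fit a Gaussian to the clean features (class-conditional fits in practice).
mu = clean_feats.mean(axis=0)
cov = np.cov(clean_feats, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def mahalanobis(x):
    """Mahalanobis distance of each row of x to the clean-feature Gaussian."""
    d = x - mu
    return np.sqrt(np.einsum('ni,ij,nj->n', d, cov_inv, d))

# Flag inputs whose distance exceeds the 95th percentile of clean distances.
tau = np.quantile(mahalanobis(clean_feats), 0.95)
flagged = mahalanobis(adv_feats) > tau
print(f"flagged {flagged.mean():.0%} of adversarial samples")
```

The cosine distance based detectors follow the same template but swap the metric, measuring angular deviation of a feature vector from class-mean features instead of covariance-weighted Euclidean distance.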

Source
http://dx.doi.org/10.1088/1741-2552/ad8964

Publication Analysis

Top Keywords

adversarial detection (20), detection approaches (16), white-box attacks (12), adversarial (11), detection eeg-based (8), brain-computer interfaces (8), eeg-based bcis (8), adversarial attacks (8), adversarial examples (8), distance based (8)

Similar Publications

Purpose: The survival rate of breast cancer for women in low- and middle-income countries is poor compared with that in high-income countries. Point-of-care ultrasound (POCUS) combined with deep learning could potentially be a suitable solution enabling early detection of breast cancer. We aim to improve a classification network dedicated to classifying POCUS images by comparing different techniques for increasing the amount of training data.

Supracondylar humerus fractures in children are among the most common elbow fractures in pediatrics. However, their diagnosis can be particularly challenging due to the anatomical characteristics and imaging features of the pediatric skeleton. In recent years, convolutional neural networks (CNNs) have achieved notable success in medical image analysis, though their performance typically relies on large-scale, high-quality labeled datasets.

Physical Layer Security (PLS) in Cognitive Radio Networks (CRN) improves the confidentiality, availability, and integrity of external communication between devices and users. Security models for sensing and beamforming reduce the impact of adversaries such as eavesdroppers at the signal processing layer. To that end, this article introduces a Secure Channel Estimation Model (SCEM) using Channel State Information (CSI) and Deep Learning (DL) to improve the PLS.

Adversarial attacks have been widely studied in computer vision (CV), but their effect on network security applications remains an open area of investigation. As IoT, AI, and 5G continue to converge to realize the potential of Industry 4.0, security events and incidents on IoT systems have increased.

An adversarial transformer for anomalous lamb wave pattern detection.

Neural Netw

January 2025

Department of Mechanical Engineering, University of South Carolina, Columbia, SC 29208, USA.

Lamb waves are widely used for defect detection in structural health monitoring, and various methods have been developed for Lamb wave data analysis. This paper presents an unsupervised Adversarial Transformer model for anomalous Lamb wave pattern detection, analyzing spatiotemporal images generated by a hybrid PZT-scanning laser Doppler vibrometer (SLDV). The model includes global and local attention mechanisms, both trained adversarially.
