Assessing communication abilities in patients with disorders of consciousness (DOCs) is challenging because of the limitations of behavioral scales. Electroencephalography (EEG)-based brain-computer interfaces (BCIs) and eye tracking, which detects ocular changes, can capture mental activity without requiring physical behavior and may therefore offer a solution. This study proposes a hybrid BCI that integrates EEG and eye tracking to facilitate communication in patients with DOCs. Specifically, the BCI presents a question and two randomly flashing answers (yes/no), and subjects are instructed to focus on one answer. A multimodal target recognition network (MTRN) is proposed to detect P300 potentials and eye-tracking responses (i.e., pupil constriction and gaze) and to identify the target in real time. In the MTRN, a dual-stream feature extraction module with two independent multiscale convolutional neural networks extracts multiscale features from the multimodal data. A multimodal attention strategy then adaptively extracts the information most relevant to the target from both modalities. Finally, a prototype network serves as the classifier to facilitate classification with small sample sizes. Ten healthy individuals, nine patients with DOCs, and one patient with locked-in syndrome (LIS) were included in this study. All healthy subjects achieved 100% accuracy. Five patients could communicate with our BCI, with 76.1 ± 7.9% accuracy; among them, two patients who were noncommunicative on the behavioral scale exhibited communication ability via our BCI. Additionally, we assessed the performance of unimodal BCIs and compared the MTRN with other methods. All the results suggest that our BCI can yield more sensitive outcomes than the Coma Recovery Scale-Revised (CRS-R) and can serve as a valuable communication tool.
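
The abstract does not give implementation details, but the architecture it describes (two independent multiscale CNN streams, a multimodal attention step, and a prototype-network classifier) can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed layer sizes, channel counts and attention/prototype formulations, not the authors' implementation:

# Minimal sketch of an MTRN-style model, based only on the abstract's description.
# All layer sizes, kernel scales, and the attention/prototype details are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleConv(nn.Module):
    """One stream: parallel temporal convolutions at several scales."""
    def __init__(self, in_ch, out_ch=16, kernel_sizes=(7, 15, 31)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm1d(out_ch),
                nn.ELU(),
                nn.AdaptiveAvgPool1d(32),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                      # x: (batch, channels, time)
        feats = [b(x) for b in self.branches]  # each: (batch, out_ch, 32)
        return torch.cat(feats, dim=1)         # (batch, out_ch * n_scales, 32)


class MTRNSketch(nn.Module):
    def __init__(self, eeg_ch=32, eye_ch=3, embed_dim=64):
        super().__init__()
        self.eeg_stream = MultiScaleConv(eeg_ch)
        self.eye_stream = MultiScaleConv(eye_ch)
        feat_dim = 16 * 3 * 32                 # out_ch * n_scales * pooled length
        # Multimodal attention: learn a weight per modality per sample.
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.project = nn.Linear(feat_dim, embed_dim)

    def embed(self, eeg, eye):
        f_eeg = self.eeg_stream(eeg).flatten(1)
        f_eye = self.eye_stream(eye).flatten(1)
        w = self.attn(torch.cat([f_eeg, f_eye], dim=-1))       # (batch, 2)
        fused = w[:, :1] * f_eeg + w[:, 1:] * f_eye            # weighted fusion
        return self.project(fused)                             # (batch, embed_dim)

    def forward(self, eeg, eye, support_eeg, support_eye, support_y):
        """Prototype-network classification: compare each query embedding to
        class prototypes (mean embeddings of labeled support trials)."""
        q = self.embed(eeg, eye)
        s = self.embed(support_eeg, support_eye)
        protos = torch.stack([s[support_y == c].mean(0) for c in support_y.unique()])
        dists = torch.cdist(q, protos)          # Euclidean distance to prototypes
        return F.log_softmax(-dists, dim=-1)    # nearer prototype -> higher probability

In a prototype network, each class is represented by the mean embedding of a few labeled support trials and queries are assigned to the nearest prototype, which is why this kind of classifier suits the small per-patient calibration datasets implied by the abstract.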

Source: http://dx.doi.org/10.1109/TNSRE.2024.3435016

Publication Analysis

Top Keywords: hybrid bci (8), communication patients (8), patients disorders (8), disorders consciousness (8), behavioral scale (8), multimodal data (8), accuracy patients (8), patients (6), communication (5), bci (5)

Similar Publications

An Unsupervised Feature Extraction Method based on CLSTM-AE for Accurate P300 Classification in Brain-Computer Interface Systems.

J Biomed Phys Eng

December 2024

Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran.

Background: The P300 signal, an endogenous component of event-related potentials, is extracted from an electroencephalography signal and employed in Brain-computer Interface (BCI) devices.

Objective: The current study aimed to address challenges in extracting useful features from P300 components and to detect the P300 in a hybrid unsupervised manner based on a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM).

Material and Methods: In this cross-sectional study, the CNN, a method well suited to the P300 classification task, emphasizes the spatial characteristics of the data.
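
As a rough illustration of the CNN-plus-LSTM autoencoder idea sketched in this snippet, the code below pairs a spatial convolution with an LSTM encoder-decoder trained on reconstruction error; the filter counts, hidden sizes and downstream-classifier suggestion are assumptions, not the paper's settings:

# Hedged sketch of a CNN-LSTM autoencoder for unsupervised P300 feature learning.
import torch
import torch.nn as nn


class CLSTMAutoencoder(nn.Module):
    def __init__(self, n_channels=8, hidden=32):
        super().__init__()
        # Spatial convolution across EEG channels, then an LSTM models the
        # temporal dynamics of the resulting sequence.
        self.encoder_cnn = nn.Conv1d(n_channels, 16, kernel_size=1)
        self.encoder_lstm = nn.LSTM(16, hidden, batch_first=True)
        self.decoder_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder_out = nn.Linear(hidden, n_channels)

    def forward(self, x):                        # x: (batch, channels, time)
        z = self.encoder_cnn(x).transpose(1, 2)  # (batch, time, 16)
        _, (h, _) = self.encoder_lstm(z)
        features = h[-1]                         # (batch, hidden) compact code
        seq = features.unsqueeze(1).repeat(1, x.size(-1), 1)
        d, _ = self.decoder_lstm(seq)
        recon = self.decoder_out(d).transpose(1, 2)  # back to (batch, channels, time)
        return recon, features


# Training minimizes reconstruction error; the learned features can then feed a
# simple supervised classifier for P300 vs. non-P300 epochs.
model = CLSTMAutoencoder()
x = torch.randn(4, 8, 128)
recon, feats = model(x)
loss = nn.functional.mse_loss(recon, x)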

A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding.

J Neural Eng

December 2024

West China Hospital of Sichuan University, No. 37 Guoxue Alley, Wuhou District, Chengdu, Sichuan 610041, China.

Objective: Brain-computer interfaces (BCIs) leverage artificial intelligence for EEG signal decoding, making them a potential new means of human-machine interaction. However, the performance of current EEG decoding methods is still insufficient for clinical applications because of inadequate EEG information extraction and limited computational resources in hospitals. This paper introduces a hybrid network that employs a Transformer with modified locally linear embedding and sliding window convolution for EEG decoding.
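
A minimal sketch of the sliding-window-convolution-plus-Transformer idea is given below: a strided 1-D convolution turns each EEG window into a token, and a standard Transformer encoder models dependencies between windows. The paper's modified locally linear embedding step is not reproduced, and all sizes are illustrative assumptions:

# Hedged sketch: windowed convolution tokenizer followed by a Transformer encoder.
import torch
import torch.nn as nn


class SlidingWindowTransformer(nn.Module):
    def __init__(self, n_channels=22, window=32, d_model=64, n_classes=4):
        super().__init__()
        # One token per window: kernel size = stride = window length.
        self.tokenizer = nn.Conv1d(n_channels, d_model, kernel_size=window, stride=window)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                            # x: (batch, channels, time)
        tokens = self.tokenizer(x).transpose(1, 2)   # (batch, n_windows, d_model)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))        # average-pool over windows


logits = SlidingWindowTransformer()(torch.randn(2, 22, 1000))   # -> (2, 4)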

MSHANet: a multi-scale residual network with hybrid attention for motor imagery EEG decoding.

Cogn Neurodyn

December 2024

Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin, China.

EEG decoding plays a crucial role in the development of motor imagery brain-computer interfaces. Deep learning has great potential to automatically extract EEG features for end-to-end decoding. Currently, deep learning faces the challenge of decoding large amounts of time-variant EEG while retaining stable performance across different sessions.

Coherence-based channel selection and Riemannian geometry features for magnetoencephalography decoding.

Cogn Neurodyn

December 2024

National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, 710049 China.

Magnetoencephalography (MEG) records the extremely weak magnetic fields at the surface of the scalp through highly sensitive sensors. Multi-channel MEG data provide high spatial and temporal resolution when measuring brain activity and can also be applied to brain-computer interfaces. However, a large number of channels leads to high computational complexity and can potentially impact decoding accuracy.
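
The snippet suggests a two-stage pipeline: rank channels by spectral coherence and keep the most connected ones, then classify with Riemannian geometry features. A hedged sketch using SciPy and pyriemann is given below; the ranking rule, the number of retained channels and the classifier choice are assumptions for illustration:

# Hedged sketch: coherence-based channel selection + Riemannian (tangent-space) features.
import numpy as np
from scipy.signal import coherence
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def select_channels(X, fs, k=16):
    """X: (n_trials, n_channels, n_times). Keep the k channels with the
    highest mean pairwise coherence (averaged over trials and frequencies)."""
    n_ch = X.shape[1]
    score = np.zeros(n_ch)
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            _, cxy = coherence(X[:, i], X[:, j], fs=fs, axis=-1)
            c = cxy.mean()
            score[i] += c
            score[j] += c
    keep = np.argsort(score)[-k:]
    return X[:, keep], keep


# Synthetic data standing in for epoched MEG trials.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 64, 256))          # 40 trials, 64 channels, 256 samples
y = rng.integers(0, 2, 40)
X_sel, kept = select_channels(X, fs=250, k=16)

clf = make_pipeline(Covariances(estimator="oas"),   # trial covariance matrices
                    TangentSpace(),                 # project to tangent space
                    LogisticRegression(max_iter=1000))
clf.fit(X_sel, y)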
