In the field of brain-to-text communication, it is difficult to perform the highly dexterous behavior of writing multiple characters via a motor-imagery-based brain-computer interface (MI-BCI), which poses a barrier to restoring communication for people who have lost the ability to move and speak. In this paper, we design and implement a multi-character classification scheme based on motor imagery (MI) electroencephalogram (EEG) signals for 29 characters, comprising the 26 English letters and 3 punctuation marks. Firstly, we design a novel experimental paradigm that increases the variety of BCI inputs by asking subjects to imagine the movement of writing the 29 characters rather than gross motor skills such as reaching or grasping. Secondly, because of the high dimensionality of EEG signals, we adopt power spectral density (PSD), principal component analysis (PCA), and kernel principal component analysis (KPCA) respectively to decompose the EEG signals and extract features, and then evaluate the results with Pearson product-moment correlation coefficients (PCCs). Thirdly, we employ k-nearest neighbor (kNN), support vector machine (SVM), extreme learning machine (ELM), and light gradient boosting machine (LightGBM) respectively to classify the 29 characters and compare the results. We have implemented a complete scheme, including paradigm design, signal acquisition, feature extraction, and classification, which can effectively classify the 29 characters. The experimental results show that KPCA yields the best feature extraction and kNN achieves the highest classification accuracy, with a final accuracy of 96.2%, exceeding that reported in comparable studies.
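The abstract does not give implementation details, but the best-performing combination it reports (PSD features, KPCA dimensionality reduction, kNN classification) maps naturally onto a standard scientific-Python pipeline. The sketch below is only an illustration of that combination under assumed conventions: epochs are assumed to be preprocessed MI-EEG trials of shape (n_trials, n_channels, n_samples), and all parameter values (sampling rate, `nperseg`, number of components, `k`) are placeholders, not values from the paper.

```python
# Minimal sketch of a PSD -> KPCA -> kNN pipeline (illustrative, not the authors' code).
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def psd_features(epochs, fs=250, nperseg=128):
    """Welch power spectral density per channel, flattened into one vector per trial."""
    _, pxx = welch(epochs, fs=fs, nperseg=nperseg, axis=-1)
    return pxx.reshape(len(epochs), -1)

# Synthetic stand-in data: 29 character classes, 20 trials each,
# 32 channels, 1 s of EEG at an assumed 250 Hz sampling rate.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((29 * 20, 32, 250))
y = np.repeat(np.arange(29), 20)

X = psd_features(X_raw)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=50, kernel="rbf"),  # nonlinear reduction of the high-dimensional PSD features
    KNeighborsClassifier(n_neighbors=5),       # kNN, the classifier reported as most accurate
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Applying KPCA before kNN is consistent with the paper's finding: kNN degrades in very high-dimensional spaces, so compressing the PSD features with a nonlinear projection before the distance-based classifier is a reasonable reading of why that pairing performs best.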
DOI: http://dx.doi.org/10.1016/j.neuroscience.2023.12.001