Surface electromyography (sEMG) signals reflect both the local electrical activity of individual muscle fibers and the synergistic action of the overall muscle group, making them useful for gesture control of myoelectric manipulators. In recent years, deep learning methods have increasingly been applied to sEMG gesture recognition because of their powerful automatic feature extraction capabilities. sEMG signals contain rich local details and global patterns, but single-scale convolutional networks are limited in their ability to capture both comprehensively, which restricts model performance. This paper proposes a deep learning model based on multi-scale feature fusion, MS-CLSTM (MS Block-ResCBAM-Bi-LSTM). The MS Block uses convolutional kernels of different scales to extract local details, global patterns, and inter-channel correlations from sEMG signals. The ResCBAM, which integrates CBAM with a Simple-ResNet, strengthens attention to key gesture information while alleviating the overfitting common on small-sample datasets. Experimental results demonstrate that the MS-CLSTM model achieves recognition accuracies of 86.66% and 83.27% on the Ninapro DB2 and DB4 datasets, respectively, and reaches 89% accuracy in real-time myoelectric manipulator gesture prediction experiments. The proposed model performs strongly on sEMG gesture recognition tasks, offering an effective solution for prosthetic hand control, robotic control, and other human-computer interaction applications.
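
As a rough illustration of the architecture described above, the sketch below assembles a minimal PyTorch pipeline with a multi-scale convolution block, a residual block with CBAM-style channel and temporal attention, and a bidirectional LSTM classifier. All layer widths, kernel scales, the number of sEMG channels, and the gesture count are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of an MS-CLSTM-style pipeline: multi-scale 1-D convolutions,
# a residual block with CBAM-style attention, and a Bi-LSTM classifier.
# Kernel sizes, channel widths, and class counts are assumed for illustration.
import torch
import torch.nn as nn


class MSBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):                     # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)


class CBAM1d(nn.Module):
    """Channel attention followed by temporal (spatial) attention."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(), nn.Linear(ch // reduction, ch)
        )
        self.spatial = nn.Conv1d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=2))
        mx = self.mlp(x.amax(dim=2))
        x = x * torch.sigmoid(avg + mx).unsqueeze(2)
        # Temporal attention over channel-pooled feature maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


class ResCBAM(nn.Module):
    """Simple residual block with CBAM attention on the residual path."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )
        self.cbam = CBAM1d(ch)

    def forward(self, x):
        return torch.relu(x + self.cbam(self.conv(x)))


class MSCLSTM(nn.Module):
    def __init__(self, emg_channels=12, n_classes=49, width=32):
        super().__init__()
        self.ms = MSBlock(emg_channels, width)       # 3 branches -> 3*width channels
        self.res = ResCBAM(3 * width)
        self.lstm = nn.LSTM(3 * width, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                            # x: (batch, emg_channels, time)
        h = self.res(self.ms(x))
        h, _ = self.lstm(h.transpose(1, 2))          # (batch, time, features)
        return self.fc(h[:, -1])                     # classify from the last time step


# Example: a batch of 8 windows, 12 sEMG channels, 200 samples per window.
logits = MSCLSTM()(torch.randn(8, 12, 200))
print(logits.shape)  # torch.Size([8, 49])
```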


Source: http://dx.doi.org/10.3390/biomimetics9120784
