Background: Prediction of movement intentions from electromyographic (EMG) signals is typically performed with a pattern recognition approach, wherein a short frame of raw EMG is compressed into an instantaneous feature encoding that is meaningful for classification. However, EMG signals are time-varying, implying that a frame-wise approach may not sufficiently incorporate temporal context into predictions, leading to erratic and unstable prediction behavior.
Objective: We demonstrate that sequential prediction models, and specifically temporal convolutional networks, can leverage useful temporal information from EMG to achieve superior predictive performance.
Methods: We compare this approach to other sequential and frame-wise models predicting 3 simultaneous hand and wrist degrees-of-freedom from 2 amputee and 13 non-amputee human subjects in a minimally constrained experiment. We also compare these models on the publicly available Ninapro and CapgMyo amputee and non-amputee datasets.
Results: Temporal convolutional networks yield predictions that are more accurate and stable than those of frame-wise models, especially during inter-class transitions, with an average response delay of 4.6 ms and a simpler feature encoding. Their performance can be further improved with adaptive reinforcement training.
Significance: Sequential models that incorporate temporal information from EMG achieve superior movement prediction performance, and these models also allow for novel types of interactive training.
Conclusions: Addressing EMG decoding as a sequential modeling problem will lead to enhancements in the reliability, responsiveness, and movement complexity available from prosthesis control systems.
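As a concrete illustration of the sequential approach, the sketch below shows a minimal causal temporal convolutional network for per-frame EMG prediction, written in PyTorch. The channel counts, kernel size, dilation schedule, and number of output classes are illustrative assumptions only and do not reproduce the architecture, feature encoding, or adaptive reinforcement training procedure reported in the paper.

```python
# Minimal sketch of a causal temporal convolutional network (TCN) for
# per-frame EMG sequence prediction. All hyperparameters are assumed
# for illustration, not taken from the published study.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvBlock(nn.Module):
    """Dilated 1-D convolution, left-padded so each output frame
    depends only on current and past EMG samples (causal)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad the past only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))  # no look-ahead into future samples
        return self.relu(self.conv(x))


class EMGTCN(nn.Module):
    """Stack of dilated causal blocks with a per-frame linear readout,
    producing one movement prediction for every incoming time step."""
    def __init__(self, n_emg_channels=8, n_classes=7, hidden=32,
                 kernel_size=3, n_blocks=4):
        super().__init__()
        blocks, ch = [], n_emg_channels
        for b in range(n_blocks):
            blocks.append(CausalConvBlock(ch, hidden, kernel_size,
                                          dilation=2 ** b))
            ch = hidden
        self.tcn = nn.Sequential(*blocks)
        self.readout = nn.Conv1d(hidden, n_classes, kernel_size=1)

    def forward(self, x):
        # x: (batch, n_emg_channels, time) -> (batch, n_classes, time)
        return self.readout(self.tcn(x))


# Example: 1 s of 8-channel EMG at 1 kHz yields a prediction per sample.
model = EMGTCN()
emg = torch.randn(1, 8, 1000)
logits = model(emg)  # shape: (1, 7, 1000)
```

Because every convolution is left-padded, each output frame depends only on present and past EMG samples, which is what lets a sequential model of this kind emit a prediction at every time step while retaining temporal context from preceding frames.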
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10497232
DOI: http://dx.doi.org/10.1109/TBME.2019.2943309