Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to the inputs used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but only in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e. factors that hinder the real-world robustness of myoelectric control and that are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1), (2), and a newly identified confound faced by discrete systems (3). Results from four different discrete myoelectric control architectures ((1) Majority Vote LDA, (2) Dynamic Time Warping, (3) an LSTM network trained with Cross Entropy, and (4) an LSTM network trained with Contrastive Learning) show that classification accuracy is significantly degraded (p<0.05) by each of these confounds. This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
DOI: http://dx.doi.org/10.1088/1741-2552/ad4915
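For readers unfamiliar with the architectures listed above, the sketch below illustrates the simplest of them, a majority-vote LDA discrete gesture classifier: frame-wise LDA predictions across a gesture's duration are reduced to a single event-level decision by majority vote. This is a minimal Python sketch using synthetic data and assumed dimensions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of majority-vote LDA for
# discrete gesture recognition: frame-wise LDA predictions over a gesture
# are reduced to one event-level decision. All data below are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_gestures, frames_per_gesture, n_features, n_classes = 200, 30, 8, 3

# Synthetic frame-level features (e.g., mean absolute value per EMG channel)
# and a gesture label shared by every frame of that gesture.
X = rng.normal(size=(n_gestures, frames_per_gesture, n_features))
y = rng.integers(0, n_classes, size=n_gestures)
X += y[:, None, None] * 0.5          # separate the classes slightly

# Train LDA on individual frames from the first 150 gestures.
lda = LinearDiscriminantAnalysis()
lda.fit(X[:150].reshape(-1, n_features), np.repeat(y[:150], frames_per_gesture))

def classify_gesture(frames: np.ndarray) -> int:
    """Majority vote over frame-wise LDA predictions for one gesture."""
    frame_preds = lda.predict(frames)
    return int(np.bincount(frame_preds, minlength=n_classes).argmax())

preds = np.array([classify_gesture(g) for g in X[150:]])
print("held-out accuracy:", (preds == y[150:]).mean())
```

The DTW and LSTM architectures mentioned in the abstract instead operate on the gesture's full temporal sequence rather than on independent per-frame votes.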
J Neural Eng
January 2025
Electrical and Computer Engineering Department, University of New Brunswick, 3 Bailey Dr., Fredericton, New Brunswick, E3B5A3, CANADA.
Objective: While myoelectric control has been commercialized in prosthetics for decades, its adoption for more general human-machine interaction has been slow. Although high accuracies can be achieved across many gestures, current control approaches are prone to false activations in real-world conditions. This is because the same electromyogram (EMG) signals generated during the elicitation of gestures are also naturally produced when performing activities of daily living (ADLs), such as when driving to work or typing on a keyboard.
Sci Rep
December 2024
Department of Health Science and Technology, Aalborg University, Aalborg, Denmark.
EMG feedback improves force control of a myoelectric hand prosthesis by conveying the magnitude of the myoelectric signal back to the users via tactile stimulation. The present study aimed to test whether this method can be used by a participant with a high-level amputation whose muscle used for prosthesis control (pectoralis major) was not intuitively related to hand function. Vibrotactile feedback was delivered to the participant's torso, while the control was tested using EMG from three different muscles.
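As a rough illustration of the feedback scheme described above, the following sketch quantizes an EMG envelope into a small number of vibrotactile intensity levels. The window length, normalization constant, and number of levels are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch, under assumed parameters, of the kind of mapping EMG
# feedback relies on: the myoelectric signal's envelope is quantized into a
# small number of vibrotactile intensity levels delivered back to the user.
import numpy as np

def emg_envelope(emg: np.ndarray, window: int = 200) -> np.ndarray:
    """Moving RMS envelope of a raw EMG signal."""
    squared = np.convolve(emg**2, np.ones(window) / window, mode="same")
    return np.sqrt(squared)

def to_vibration_level(envelope: np.ndarray, max_activation: float,
                       n_levels: int = 4) -> np.ndarray:
    """Quantize the normalized envelope into discrete vibration levels."""
    normalized = np.clip(envelope / max_activation, 0.0, 1.0)
    return np.minimum((normalized * n_levels).astype(int), n_levels - 1)

# Synthetic EMG burst: noise whose amplitude ramps up and down.
t = np.linspace(0, 1, 2000)
emg = np.random.default_rng(1).normal(size=t.size) * np.sin(np.pi * t)
levels = to_vibration_level(emg_envelope(emg), max_activation=0.8)
print("vibration levels used:", np.unique(levels))
```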
Biomimetics (Basel)
December 2024
School of Materials Science and Engineering, Central South University of Forestry and Technology, Changsha 410004, China.
Surface electromyography (sEMG) signals reflect the local electrical activity of muscle fibers and the synergistic action of the overall muscle group, making them useful for gesture control of myoelectric manipulators. In recent years, deep learning methods have increasingly been applied to sEMG gesture recognition due to their powerful automatic feature extraction capabilities. sEMG signals contain rich local details and global patterns, but single-scale convolutional networks are limited in their ability to capture both comprehensively, which restricts model performance.
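To make the multi-scale idea concrete, the sketch below shows a toy multi-scale 1-D convolutional network for sEMG windows, with parallel branches of different kernel sizes whose pooled features are concatenated before classification. It is not the paper's model; all layer sizes and kernel widths are assumptions.

```python
# Minimal sketch (not the paper's model) of a multi-scale 1-D CNN for sEMG
# gesture recognition: parallel convolution branches with small and large
# kernels capture local detail and longer-range structure, and their pooled
# feature maps are concatenated before classification.
import torch
import torch.nn as nn

class MultiScaleEMGNet(nn.Module):
    def __init__(self, n_channels: int = 8, n_classes: int = 6):
        super().__init__()
        # One branch per temporal scale (kernel sizes are assumptions).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(n_channels, 32, k, padding=k // 2),
                          nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1))
            for k in (3, 7, 15)
        ])
        self.classifier = nn.Linear(32 * 3, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); concatenate pooled branch outputs.
        feats = torch.cat([b(x).squeeze(-1) for b in self.branches], dim=1)
        return self.classifier(feats)

logits = MultiScaleEMGNet()(torch.randn(4, 8, 400))   # 4 windows, 8 channels
print(logits.shape)                                    # torch.Size([4, 6])
```

A single-scale network would correspond to keeping only one of these branches.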
J Neuroeng Rehabil
December 2024
Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA.
Background: This research aims to improve the control of assistive devices for individuals with hemiparesis after stroke by providing intuitive and proportional motor control. Stroke is the leading cause of disability in the United States, with 80% of stroke-related disability coming in the form of hemiparesis, presented as weakness or paresis on one half of the body. Current assistive exoskeletons controlled via electromyography do not allow for fine force regulation.
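As a generic illustration of proportional myoelectric control (the broad technique referred to above, not the authors' controller), the sketch below maps a normalized EMG activation level to a graded assistance command through a deadband, gain, and saturation; all parameter values are assumptions.

```python
# Minimal sketch of generic proportional myoelectric control: normalized EMG
# activation is mapped to a device assistance command via a deadband, a gain,
# and saturation. Parameter values are illustrative assumptions.
import numpy as np

def proportional_command(activation: np.ndarray, deadband: float = 0.1,
                         gain: float = 1.5) -> np.ndarray:
    """Map normalized EMG activation (0..1) to a command in [0, 1]."""
    effort = np.clip(activation - deadband, 0.0, None) / (1.0 - deadband)
    return np.clip(gain * effort, 0.0, 1.0)

# Example: a slow ramp in muscle activation produces a graded command
# rather than the on/off behaviour of threshold-based control.
activation = np.linspace(0.0, 1.0, 11)
print(np.round(proportional_command(activation), 2))
```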
Front Neurorobot
December 2024
School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom.
Introduction: Myoelectric control systems translate different patterns of electromyographic (EMG) signals into the control commands of diverse human-machine interfaces via hand gesture recognition, enabling intuitive control of prostheses and immersive interactions in the metaverse. The effect of arm position is a confounding factor leading to variability in EMG characteristics. Developing a model whose characteristics and performance are invariant across postures could greatly promote the translation of myoelectric control into real-world practice.