AI Article Synopsis

  • The study highlights the limitations of existing Human-Machine Interfaces (HMI) that use surface electromyography (sEMG) for gesture recognition, specifically in applications like prosthetics and rehabilitation.
  • A new strategy utilizing wearable A-mode ultrasound and a two-stage cascade model is introduced, which effectively classifies grasping gestures while simultaneously estimating applied force.
  • Experimental results show that this new method outperforms traditional models in both classification and force estimation, achieving fast real-time recognition suitable for practical applications.

Article Abstract

Human-Machine Interfaces (HMIs) based on gesture recognition from surface electromyography (sEMG) have made significant progress. However, sEMG has inherent limitations, and gesture classification and force estimation have not been effectively combined, which restricts applications such as prosthetic control and clinical rehabilitation. In this paper, a grasping gesture and force recognition strategy based on wearable A-mode ultrasound and a two-stage cascade model is proposed, which estimates the applied force while simultaneously classifying the grasping gesture. Experiments cover five grasping gestures and four force levels (5-50% MVC). The results demonstrate that the proposed model performs significantly better than the traditional model in both classification and regression (p < 0.001). In addition, the two-stage cascade regression model (TSCRM) using Gaussian process regression (GPR) with the mean and standard deviation (MSD) feature achieves excellent results, with a normalized root-mean-square error (nRMSE) of 0.1049 ± 0.0374 and a correlation coefficient (CC) of 0.9461 ± 0.0354. Moreover, the latency of the model meets the requirement for real-time recognition (T < 15 ms). These outcomes demonstrate the feasibility of the proposed recognition strategy and provide a reference for prosthetic control and related fields.
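
The abstract's two-stage cascade can be pictured as a classifier that first identifies the grasping gesture, followed by a gesture-specific Gaussian process regressor that estimates force from the same features. The sketch below is an illustrative reconstruction under stated assumptions, not the paper's exact pipeline: the MSD feature is taken as the per-channel mean and standard deviation of each A-mode ultrasound window, the first-stage classifier (linear discriminant analysis) and the GPR kernel are placeholder choices, and nRMSE is normalized by the ground-truth range, one common convention.

    # Sketch of a two-stage cascade: gesture classification, then per-gesture force regression.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel


    def msd_features(windows):
        """windows: (n_samples, n_channels, n_points) -> (n_samples, 2 * n_channels)."""
        return np.concatenate([windows.mean(axis=2), windows.std(axis=2)], axis=1)


    class TwoStageCascade:
        def __init__(self):
            self.classifier = LinearDiscriminantAnalysis()  # stage 1: gesture label
            self.regressors = {}                            # stage 2: one GPR per gesture

        def fit(self, X, gestures, forces):
            gestures = np.asarray(gestures)
            self.classifier.fit(X, gestures)
            for g in np.unique(gestures):
                idx = gestures == g
                gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
                gpr.fit(X[idx], forces[idx])
                self.regressors[g] = gpr
            return self

        def predict(self, X):
            gestures = self.classifier.predict(X)
            # Route each sample to the regressor of its predicted gesture.
            forces = np.array([self.regressors[g].predict(x[None, :])[0]
                               for g, x in zip(gestures, X)])
            return gestures, forces


    def nrmse(y_true, y_pred):
        """RMSE normalized by the ground-truth range (one common convention)."""
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
        return rmse / (np.ptp(y_true) + 1e-12)


    def cc(y_true, y_pred):
        """Pearson correlation coefficient between predicted and true force."""
        return np.corrcoef(y_true, y_pred)[0, 1]

Training one GPR per gesture class is what makes this a cascade: the force estimate is conditioned on the recognized gesture rather than pooled across all gestures.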

Source
http://dx.doi.org/10.1109/TNSRE.2022.3196926

Publication Analysis

Top Keywords

gesture classification (8)
classification force (8)
force estimation (8)
strategy based (8)
based wearable (8)
wearable a-mode (8)
a-mode ultrasound (8)
cascade model (8)
prosthetic control (8)
grasping gesture (8)

Similar Publications

Liquid-Metal-Based Multichannel Strain Sensor for Sign Language Gesture Classification Using Machine Learning.

ACS Appl Mater Interfaces

January 2025

Centre for Robotics and Automation, Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999077, China.

Liquid metals combine metal-like electrical conductivity with excellent deformability in their liquid state, making them promising materials for flexible and stretchable wearable sensors. However, patterning liquid metals on soft substrates has been a challenge due to their high surface tension. In this paper, a new method is proposed to overcome the difficulties in fabricating liquid-state strain sensors.


Exploring pattern-specific components associated with hand gestures through different sEMG measures.

J Neuroeng Rehabil

December 2024

School of Information Science and Technology, Fudan University, Shanghai, 200433, China.

For surface electromyography (sEMG)-based human-machine interaction systems, accurately recognizing users' gesture intent is crucial. However, due to the existence of subject-specific components in sEMG signals, subject-specific models may deteriorate when applied to new users. In this study, we hypothesize that in addition to subject-specific components, sEMG signals also contain pattern-specific components, which are independent of individuals and related solely to gesture patterns.


Surface electromyography (sEMG) data has been extensively utilized in deep learning algorithms for hand movement classification. This paper introduces a novel method for hand gesture classification using sEMG data, addressing accuracy challenges seen in previous studies. We propose a U-Net architecture incorporating a MobileNetV2 encoder, enhanced by a Bidirectional Long Short-Term Memory (BiLSTM) module and metaheuristic optimization for spatial feature extraction in hand gesture and motion recognition.
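
As a rough illustration of the encoder-plus-recurrent design this abstract describes, the sketch below chains a MobileNetV2 encoder with a BiLSTM head for gesture classification. It assumes sEMG windows have already been converted into small 3-channel time-frequency images; the image size, class count, and classifier head are assumptions, and the paper's U-Net decoder, skip connections, and metaheuristic optimization step are not reproduced here.

    # Minimal MobileNetV2 encoder + BiLSTM classifier sketch (Keras), under assumed shapes.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_GESTURES = 6          # assumed number of gesture classes
    IMG_SHAPE = (96, 96, 3)   # assumed spectrogram-image size per sEMG window

    # MobileNetV2 backbone without the ImageNet head; weights=None because
    # sEMG-derived images differ from natural images.
    encoder = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights=None)

    inputs = layers.Input(shape=IMG_SHAPE)
    x = encoder(inputs)                           # (batch, 3, 3, 1280) feature map
    x = layers.Reshape((-1, x.shape[-1]))(x)      # flatten the spatial grid into a sequence
    x = layers.Bidirectional(layers.LSTM(64))(x)  # BiLSTM aggregates the feature sequence
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_GESTURES, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()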


Brain-computer interfaces (BCIs) are evolving toward higher electrode counts and fully implantable solutions, which require extremely low power densities (<15 mW/cm²). To achieve this target, and to allow for a large and scalable number of channels, flexible electronics can be used as a multiplexing interface. This work introduces an active analog front-end fabricated with amorphous Indium-Gallium-Zinc-Oxide (a-IGZO) Thin-Film Transistors (TFTs) on foil, capable of active-matrix multiplexing.

Article Synopsis
  • Event-based cameras excel in human action recognition (HAR) due to their high dynamic range and efficiency, making them ideal for capturing fast movements.
  • Spiking Neural Networks (SNNs) are particularly effective with event-camera data because they operate on a spike-driven paradigm, which offers lower power consumption than traditional neural networks (a minimal event-driven sketch follows after this synopsis).
  • The paper introduces two innovative SNN models, Spike-HAR and Spike-HAR++, which enhance HAR accuracy through advanced spike attention mechanisms and efficient architecture, demonstrating impressive classification performance with minimal energy usage.
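
The power argument in the second bullet comes down to event-driven computation: a spiking neuron emits binary events, so synaptic work is performed only when a spike actually arrives rather than on every dense multiply-accumulate. The leaky integrate-and-fire (LIF) loop below is a generic, self-contained illustration of that behavior; it is not the Spike-HAR architecture, and the leak factor, threshold, weight, and input rate are arbitrary assumptions.

    # Minimal leaky integrate-and-fire neuron showing event-driven (spike-gated) work.
    import numpy as np

    rng = np.random.default_rng(0)

    T = 200          # simulation steps (assumed)
    beta = 0.9       # membrane leak factor per step (assumed)
    threshold = 1.0  # firing threshold (assumed)
    weight = 0.6     # synaptic weight (assumed)

    inputs = (rng.random(T) < 0.15).astype(float)  # sparse binary input spike train (~15% rate)

    mem = 0.0
    out_spikes = np.zeros(T)
    synaptic_ops = 0  # weight additions actually performed (event-driven work)

    for t in range(T):
        if inputs[t]:                 # work happens only when an input spike arrives
            mem += weight
            synaptic_ops += 1
        mem *= beta                   # passive leak every step
        if mem >= threshold:          # fire and reset
            out_spikes[t] = 1.0
            mem = 0.0

    print(f"input spikes: {int(inputs.sum())}, output spikes: {int(out_spikes.sum())}, "
          f"synaptic ops: {synaptic_ops} (vs {T} dense multiply-accumulates)")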
