Gesture recognition technology is widely used for the flexible and precise control of manipulators in the assisted medical field. Much current gesture recognition research using surface EMG (sEMG) has focused on static gestures, and recognition accuracy depends heavily on feature extraction and selection. However, static gesture research cannot meet the requirements of natural human-computer interaction and dexterous manipulator control. We therefore propose a multi-stream residual network (MResLSTM) for dynamic hand movement recognition, aiming to improve the accuracy and stability of dynamic gesture recognition and, at the same time, to advance research on the smooth control of the manipulator. MResLSTM combines a residual model and a convolutional long short-term memory model in a unified framework. The architecture extracts spatiotemporal features at both the global and deep levels and applies feature fusion to retain essential information. Pointwise group convolution and channel shuffle are used to reduce the computational cost of the network. A dataset containing six dynamic gestures was constructed for model training. The experimental results show that, with the same recognition model, fusing the sEMG signal with the acceleration signal yields better gesture recognition than using the sEMG signal alone. The proposed approach obtains competitive performance, with a recognition accuracy of 93.52% on our dataset and state-of-the-art precision of 89.65% on the Ninapro DB1 dataset. Applying the decoded sEMG result to the controller improves the fluency of artificial hand control, realizing continuous human-computer interaction and flexible manipulator control.
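The abstract above names pointwise group convolution with channel shuffle as the strategy for reducing network computation. As a minimal illustration of the shuffle step only (the function name and the flat-list channel representation below are our own, not from the paper): channel shuffle reorders feature channels so that the next grouped layer sees channels from every group, via a reshape-transpose-flatten trick.

```python
def channel_shuffle(channels, groups):
    """Interleave channels across groups (illustrative sketch, not the paper's code).

    Conceptually: reshape the channel list to (groups, channels_per_group),
    transpose to (channels_per_group, groups), then flatten. After a grouped
    pointwise convolution, this lets information mix between groups.
    """
    n = len(channels)
    assert n % groups == 0, "channel count must divide evenly into groups"
    per_group = n // groups
    # Element at (group g, within-group position c) moves to output index c * groups + g.
    return [channels[g * per_group + c]
            for c in range(per_group)
            for g in range(groups)]

# Example: 6 channels in 2 groups [0,1,2 | 3,4,5] interleave to [0,3,1,4,2,5]
print(channel_shuffle([0, 1, 2, 3, 4, 5], 2))
```

In a real network this permutation is applied to the channel axis of a feature tensor (e.g. via reshape and transpose on an array), but the index arithmetic is the same.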
Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8569623
DOI: http://dx.doi.org/10.3389/fbioe.2021.779353
Biomimetics (Basel)
January 2025
Key Laboratory of Mechanism Theory and Equipment Design, Ministry of Education, Tianjin University, Tianjin 300072, China.
This paper presents a novel soft crawling robot controlled by gesture recognition, aimed at enhancing the operability and adaptability of soft robots through natural human-computer interaction. The Leap Motion sensor is employed to capture hand gesture data, and Unreal Engine is used for gesture recognition. Using UE4Duino, gesture semantics are transmitted to an Arduino control system, enabling direct control of the robot's movements.
Neuropsychologia
January 2025
Neuroscience Area, SISSA, Trieste, Italy; Dipartimento di Medicina dei Sistemi, Università di Roma-Tor Vergata, Roma, Italy.
Although gesture observation tasks are believed to invariably activate the action-observation network (AON), we investigated whether engaging different cognitive mechanisms while processing identical stimuli under different explicit instructions modulates AON activations. Accordingly, 24 healthy right-handed individuals observed gestures and processed both the actor's moved hand (hand laterality judgment task, HT) and the meaning of the actor's gesture (meaning task, MT). The main brain-level result was that the HT (vs MT) differentially activated the left and right precuneus, the left inferior parietal lobe, the left and right superior parietal lobe, the middle frontal gyri bilaterally, and the left precentral gyrus.
JMIR Res Protoc
January 2025
Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.
Background: Individuals with hearing impairments may face hindrances in health care assistance, which may significantly impact the prognosis and the incidence of complications and iatrogenic events. Therefore, the development of automatic communication systems to assist the interaction between this population and health care workers is paramount.
Objective: This study aims to systematically review the evidence on communication systems using human-computer interaction techniques developed for deaf people who communicate through sign language that are already in use or proposed for use in health care contexts and have been tested with human users or videos of human users.
Data Brief
February 2025
Department of Electrical, Electronic and Communication Engineering, Military Institute of Science and Technology (MIST), Dhaka 1216, Bangladesh.
The dataset represents a significant advancement in Bengali lip-reading and visual speech recognition research, poised to drive future applications and technological progress. Despite Bengali's global status as the seventh most spoken language, with approximately 265 million speakers, linguistically rich and widely spoken languages like Bengali have been largely overlooked by the research community. The dataset fills this gap by offering a pioneering resource tailored for Bengali lip-reading, comprising visual data from 150 speakers across 54 classes, encompassing Bengali phonemes, alphabets, and symbols.
Nat Commun
January 2025
Key Lab of Fabrication Technologies for Integrated Circuits Institute of Microelectronics, Chinese Academy of Sciences, 100029, Beijing, China.
Visual sensors, including 3D light detection and ranging, neuromorphic dynamic vision sensors, and conventional frame cameras, are increasingly integrated into edge-side intelligent machines. However, their data are heterogeneous, which complicates system development. Moreover, conventional digital hardware is constrained by the von Neumann bottleneck and the physical limits of transistor scaling.