Gesture recognition technology based on millimeter-wave radar can recognize and classify user gestures in non-contact scenarios. To address the complexity of data processing with multi-feature inputs in neural networks and the poor recognition performance with single-feature inputs, this paper proposes a gesture recognition algorithm based on a ResNet Long Short-Term Memory network with an Attention Mechanism (RLA). For signal processing in RLA, a range-Doppler map is obtained by extracting the range and velocity features from the original mmWave radar signal. In the network architecture of RLA, the features of the residual network are combined with channel and spatial attention modules to prevent useful information from being neglected. We introduce a residual attention mechanism to strengthen the network's focus on gesture features and avoid the impact of irrelevant features on recognition accuracy. Additionally, we use a long short-term memory network to process temporal features, ensuring high recognition accuracy even with single-feature inputs. A series of experiments shows that the proposed algorithm achieves higher recognition performance.
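As a sketch of the range-Doppler processing step the abstract describes, the map can be computed with a 2D FFT over one FMCW radar frame: a range FFT along fast time (ADC samples) followed by a Doppler FFT along slow time (chirps). The frame dimensions and the synthetic single-target signal below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def range_doppler_map(iq_frame):
    """Compute a range-Doppler map from one frame of FMCW radar IQ data.

    iq_frame: complex array of shape (num_chirps, num_samples) --
    slow time (chirps) along axis 0, fast time (ADC samples) along axis 1.
    """
    # Range FFT over fast time: the beat frequency encodes target range.
    range_fft = np.fft.fft(iq_frame, axis=1)
    # Doppler FFT over slow time: phase progression across chirps encodes
    # radial velocity; fftshift moves zero velocity to the center row.
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    # Log-magnitude map, the kind of 2D input typically fed to a CNN.
    return 20 * np.log10(np.abs(doppler_fft) + 1e-12)

# Synthetic single target at range bin 20 with Doppler bin +8.
num_chirps, num_samples = 64, 128
n = np.arange(num_samples)
m = np.arange(num_chirps)[:, None]
frame = np.exp(2j * np.pi * (20 * n / num_samples + 8 * m / num_chirps))

rdm = range_doppler_map(frame)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(rdm), rdm.shape))
# Doppler bin 8 lands at row 8 + 64/2 = 40 after fftshift; range bin 20.
print(peak)
```

Running this places the peak at row 40 (Doppler bin 8 shifted by half the chirp count) and column 20 (the range bin), confirming how the two FFT axes map to velocity and range.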
DOI: http://dx.doi.org/10.3390/s25020469
Sensors (Basel)
January 2025
College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
Sensors (Basel)
January 2025
Department of Artificial Intelligence, Chung-Ang University, Heukseok-dong, Dongjak-gu, Seoul 06974, Republic of Korea.
Sensor-based gesture recognition on mobile devices is critical to human-computer interaction, enabling intuitive user input for various applications. However, current approaches often rely on server-based retraining whenever new gestures are introduced, incurring substantial energy consumption and latency due to frequent data transmission. To address these limitations, we present the first on-device continual learning framework for gesture recognition.
Biomimetics (Basel)
January 2025
Key Laboratory of Mechanism Theory and Equipment Design, Ministry of Education, Tianjin University, Tianjin 300072, China.
This paper presents a novel soft crawling robot controlled by gesture recognition, aimed at enhancing the operability and adaptability of soft robots through natural human-computer interactions. The Leap Motion sensor is employed to capture hand gesture data, and Unreal Engine is used for gesture recognition. Using the UE4Duino, gesture semantics are transmitted to an Arduino control system, enabling direct control over the robot's movements.
Neuropsychologia
January 2025
Neuroscience Area, SISSA, Trieste, Italy; Dipartimento di Medicina dei Sistemi, Università di Roma-Tor Vergata, Roma, Italy.
Although gesture observation tasks are believed to invariably activate the action-observation network (AON), we investigated whether engaging different cognitive mechanisms while processing identical stimuli under different explicit instructions modulates AON activations. Accordingly, 24 healthy right-handed individuals observed gestures and processed both the actor's moved hand (hand laterality judgment task, HT) and the meaning of the actor's gesture (meaning task, MT). The main brain-level result was that the HT (vs. the MT) differentially activated the precuneus bilaterally, the left inferior parietal lobe, the superior parietal lobes bilaterally, the middle frontal gyri bilaterally, and the left precentral gyrus.
JMIR Res Protoc
January 2025
Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.
Background: Individuals with hearing impairments may face hindrances in health care assistance, which may significantly impact the prognosis and the incidence of complications and iatrogenic events. Therefore, the development of automatic communication systems to assist the interaction between this population and health care workers is paramount.
Objective: This study aims to systematically review the evidence on communication systems using human-computer interaction techniques developed for deaf people who communicate through sign language, that are already in use or proposed for use in health care contexts, and that have been tested with human users or videos of human users.