Collaborative robots are currently deployed in professional environments, working alongside professional human operators and helping to strike the right balance between mechanization and manual intervention in the manufacturing processes required by Industry 4.0. In this paper, the contribution of gesture recognition and pose estimation to the smooth introduction of cobots into an industrial assembly line is described, with a view to performing actions in parallel with the human operators and enabling interaction between them. The proposed active vision system uses two RGB-D cameras that record the gestures and poses of the operator from different points of view, building an external perception layer for the robot that facilitates spatiotemporal adaptation in accordance with the human's behavior. The use case of this work concerns the LCD TV assembly line of an appliance manufacturer and comprises two parts. The first part of the operation is assigned to a robot, strengthening the assembly line; the second part is assigned to a human operator. Gesture recognition, pose estimation, physical interaction, and sonic notification together create a multimodal human-robot interaction system. Five experiments are performed to test whether gesture recognition and pose estimation can reduce the cycle time and the range of motion of the operator, respectively. Physical interaction is achieved using the force sensor of the cobot. Pose estimation through a skeleton-tracking algorithm provides the cobot with human pose information and makes it spatially adjustable. Sonic notification is added for the case of unexpected incidents. A real-time gesture recognition module is implemented through a deep learning architecture consisting of convolutional layers, trained on egocentric-view data, and reduces the cycle time of the routine by almost 20%.
This constitutes an added value of this work, as it affords the potential to recognize gestures independently of anthropometric characteristics and background. Common metrics derived from the literature are used for the evaluation of the proposed system. The percentage of spatial adaptation of the cobot is proposed as a new KPI for a collaborative system, and the opinion of the human operator is measured through a questionnaire concerning the various affective states of the operator during the collaboration.
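The proposed KPI, the percentage of spatial adaptation of the cobot, can be illustrated with a minimal sketch. The abstract does not give the exact formula, so the definition below (adapted work cycles over total cycles) is an assumption for illustration only:

```python
def spatial_adaptation_pct(adapted_cycles: int, total_cycles: int) -> float:
    """Percentage of work cycles in which the cobot spatially adapted
    its motion to the operator's tracked pose.

    NOTE: this formula is an illustrative assumption; the paper's exact
    definition of the KPI is not given in the abstract.
    """
    if total_cycles <= 0:
        raise ValueError("total_cycles must be positive")
    return 100.0 * adapted_cycles / total_cycles

# e.g. the cobot adjusted its trajectory in 38 of 50 recorded cycles
kpi = spatial_adaptation_pct(38, 50)  # 76.0
```

A ratio-style KPI like this has the advantage of being comparable across experiments with different numbers of cycles.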


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8649894
DOI: http://dx.doi.org/10.3389/fnbot.2021.703545

Publication Analysis

Top Keywords (occurrence count)

gesture recognition (20)
pose estimation (16)
recognition pose (12)
spatiotemporal adaptation (8)
collaborative robots (8)
human operators (8)
human operator (8)
physical interaction (8)
sonic notification (8)
cycle time (8)

Similar Publications

Objectives: In recent years, significant progress has been made in the research of gesture recognition using surface electromyography (sEMG) signals based on machine learning and deep learning techniques. The main motivation for sEMG gesture recognition research is to provide more natural, convenient, and personalized human-computer interaction, which makes research in this field have considerable application prospects in rehabilitation technology. However, the existing gesture recognition algorithms still need to be further improved in terms of global feature capture, model computational complexity, and generalizability.


In human-computer interaction, gesture recognition based on physiological signals offers advantages such as a more natural and fast interaction mode and less constrained by the environment than visual-based. Surface electromyography-based gesture recognition has significantly progressed. However, since individuals have physical differences, researchers must collect data multiple times from each user to train the deep learning model.


Gesture recognition technology based on millimeter-wave radar can recognize and classify user gestures in non-contact scenarios. To address the complexity of data processing with multi-feature inputs in neural networks and the poor recognition performance with single-feature inputs, this paper proposes a gesture recognition algorithm based on ResNet Long Short-Term Memory with an Attention Mechanism (RLA). In the aspect of signal processing in RLA, a range-Doppler map is obtained through the extraction of the range and velocity features in the original mmWave radar signal.
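The range-Doppler map mentioned above is conventionally obtained with two FFTs over a frame of chirped radar returns: one along fast time (range) and one along slow time (velocity). The sketch below shows the standard construction; the paper's exact FFT sizes, windowing, and scaling are not specified in the snippet, so the parameters here are assumptions:

```python
import numpy as np

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """Compute a range-Doppler magnitude map from one radar frame.

    frame: complex baseband samples with shape (n_chirps, n_samples),
    i.e. slow time x fast time. Illustrative sketch only; windowing
    and calibration steps used in practice are omitted.
    """
    # Range FFT along fast time (one FFT per chirp)
    range_fft = np.fft.fft(frame, axis=1)
    # Doppler FFT along slow time, shifted so zero velocity is centered
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(rd)

# Example: a frame of 64 chirps with 128 samples each
frame = np.random.randn(64, 128) + 1j * np.random.randn(64, 128)
rd_map = range_doppler_map(frame)  # shape (64, 128), non-negative
```

Each row of the resulting map then corresponds to a radial velocity bin and each column to a range bin, which is the 2-D feature image typically fed to a CNN or CNN-LSTM classifier.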


Sensor-based gesture recognition on mobile devices is critical to human-computer interaction, enabling intuitive user input for various applications. However, current approaches often rely on server-based retraining whenever new gestures are introduced, incurring substantial energy consumption and latency due to frequent data transmission. To address these limitations, we present the first on-device continual learning framework for gesture recognition.


A Symmetrical Leech-Inspired Soft Crawling Robot Based on Gesture Control.

Biomimetics (Basel)

January 2025

Key Laboratory of Mechanism Theory and Equipment Design, Ministry of Education, Tianjin University, Tianjin 300072, China.

This paper presents a novel soft crawling robot controlled by gesture recognition, aimed at enhancing the operability and adaptability of soft robots through natural human-computer interactions. The Leap Motion sensor is employed to capture hand gesture data, and Unreal Engine is used for gesture recognition. Using the UE4Duino, gesture semantics are transmitted to an Arduino control system, enabling direct control over the robot's movements.
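The control chain described above (Leap Motion gesture capture, recognition in Unreal Engine, command transmission via UE4Duino to an Arduino) reduces, at its core, to mapping a recognized gesture label to a motion command. The mapping below is a hypothetical sketch; the gesture labels and command protocol are assumptions, not the paper's actual vocabulary:

```python
# Hypothetical gesture-to-command table for a crawling soft robot.
# The actual gestures and serial protocol used with UE4Duino are not
# described in the snippet above.
GESTURE_COMMANDS = {
    "fist": "STOP",
    "open_palm": "CRAWL_FORWARD",
    "swipe_left": "TURN_LEFT",
    "swipe_right": "TURN_RIGHT",
}

def gesture_to_command(gesture: str) -> str:
    """Map a recognized gesture label to a robot motion command.

    Unknown gestures fall back to STOP as a fail-safe, so the robot
    halts rather than acting on a misrecognized input.
    """
    return GESTURE_COMMANDS.get(gesture, "STOP")

# A real system would then write the command over serial, e.g. with
# pySerial (port name is an assumption):
#   serial.Serial("/dev/ttyACM0", 9600).write(gesture_to_command(g).encode())
```

Routing every unknown label to a halting command is a common safety default in gesture-controlled actuation.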

