Biped Robots Control in Gusty Environments with Adaptive Exploration Based DDPG.

Biomimetics (Basel)

Graduate School of Information, Production and Systems, Waseda University, Kitakyushu 808-0135, Japan.

Published: June 2024

AI Article Synopsis

  • Bipedal robots are increasingly used in various environments, but maintaining balance during wind disturbances is a major challenge due to their greater complexity compared to wheeled robots.
  • To address this, researchers have developed an adaptive bio-inspired exploration framework based on the Deep Deterministic Policy Gradient (DDPG) approach, which allows robots to adjust to wind forces and optimize their stability.
  • The incorporation of Hindsight Experience Replay (HER) and a reward-reshaping strategy enhances the training process, leading to faster adaptations and improvements in walking efficiency under difficult conditions.
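As a rough illustration of how HER densifies sparse rewards, the sketch below relabels stored transitions with goals actually achieved later in the same episode (the common "future" strategy). The tuple layout, the integer states, and the equality-based sparse reward are illustrative assumptions, not the paper's actual state or goal encoding.

```python
import random

def her_relabel(episode, k=4):
    """Hindsight Experience Replay relabeling, "future" strategy.

    Minimal sketch over assumed transition tuples
    (state, action, next_state, goal); the biped's real state/goal
    representation is not reproduced here.
    """
    relabeled = []
    for t, (s, a, s_next, goal) in enumerate(episode):
        # Keep the original transition with its sparse reward.
        relabeled.append((s, a, s_next, goal, float(s_next == goal)))
        # Add up to k copies whose goal is a state achieved later on,
        # so some relabeled transitions receive reward 1.0.
        future = episode[t:]
        for _ in range(min(k, len(future))):
            _, _, fs_next, _ = random.choice(future)
            relabeled.append((s, a, s_next, fs_next, float(s_next == fs_next)))
    return relabeled
```

With k = 4 and a 3-step episode, each step contributes one original transition plus min(k, steps remaining) relabeled copies.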

Article Abstract

As technology rapidly evolves, the application of bipedal robots in various environments has expanded widely. Compared to their wheeled counterparts, these robots have more degrees of freedom and are more complex to control, making it particularly difficult to maintain balance and stability under changing wind speeds. Overcoming this challenge is critical, as it enables bipedal robots to sustain more stable gaits during outdoor tasks, thereby increasing safety and operational efficiency. To overcome the limitations of existing methods, this research introduces an adaptive bio-inspired exploration framework, based on the Deep Deterministic Policy Gradient (DDPG) approach, for bipedal robots facing wind disturbances. This framework allows the robots to perceive their bodily states through wind force inputs and adaptively modify their exploration coefficients. Additionally, to address the convergence challenges posed by sparse rewards, this study incorporates Hindsight Experience Replay (HER) and a reward-reshaping strategy to provide safer and more effective training guidance for the agents. Simulation results show that robots using this method more swiftly discover behaviors that contribute to stability in complex conditions, and demonstrate improvements in training speed and walking distance over the traditional DDPG algorithm.
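The wind-dependent exploration coefficient can be pictured with a minimal sketch: Gaussian action noise whose standard deviation is interpolated between a calm-weather minimum and a gusty-weather maximum according to the sensed wind force. The linear schedule and the specific bounds below are illustrative assumptions; the paper's actual bio-inspired adaptation law is not reproduced here.

```python
import random

class WindAdaptiveNoise:
    """Gaussian exploration noise whose scale grows with sensed wind force.

    Hypothetical sketch: a linear schedule between sigma_min (calm) and
    sigma_max (strong gusts), clipped at wind_max.
    """

    def __init__(self, action_dim, sigma_min=0.05, sigma_max=0.4, wind_max=10.0):
        self.action_dim = action_dim
        self.sigma_min = sigma_min
        self.sigma_max = sigma_max
        self.wind_max = wind_max

    def sigma(self, wind_force):
        # Stronger gusts -> larger exploration coefficient (ratio clipped to [0, 1]).
        w = min(abs(wind_force) / self.wind_max, 1.0)
        return self.sigma_min + w * (self.sigma_max - self.sigma_min)

    def sample(self, wind_force):
        # Per-joint noise added to the deterministic actor's action.
        s = self.sigma(wind_force)
        return [random.gauss(0.0, s) for _ in range(self.action_dim)]
```

In a DDPG loop, the sampled noise vector would simply be added to the actor's output before clipping to the action bounds.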

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11202199
DOI: http://dx.doi.org/10.3390/biomimetics9060346

Publication Analysis

Top Keywords

bipedal robots (12); robots (6); biped robots (4); robots control (4); control gusty (4); gusty environments (4); environments adaptive (4); adaptive exploration (4); exploration based (4); based ddpg (4)

Similar Publications

Balance recovery schemes following mediolateral gyroscopic moment perturbations during walking.

PLoS One

December 2024

Lauflabor Locomotion Laboratory, Institute of Sport Science, Centre for Cognitive Science, Technische Universität Darmstadt, Hessen, Germany.

Maintaining balance during human walking hinges on the exquisite orchestration of whole-body angular momentum (WBAM). This study delves into the regulation of WBAM during gait by examining balance strategies in response to upper-body moment perturbations in the frontal plane. A portable Angular Momentum Perturbator (AMP) was utilized in this work, capable of generating perturbation torques on the upper body while minimizing the impact on the center of mass (CoM) excursions.

A Whole-Body Coordinated Motion Control Method for Highly Redundant Degrees of Freedom Mobile Humanoid Robots.

Biomimetics (Basel)

December 2024

School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, China.

Humanoid robots are becoming a global research focus. Due to the limitations of bipedal walking technology, mobile humanoid robots equipped with a wheeled chassis and dual arms have emerged as the most suitable configuration for performing complex tasks in factory or home environments. To address the high redundancy issue arising from the wheeled chassis and dual-arm design of mobile humanoid robots, this study proposes a whole-body coordinated motion control algorithm based on arm potential energy optimization.

This article introduces a novel perspective on designing a stepping controller for bipedal robots. Typically, designing a state-feedback controller to stabilize a bipedal robot to a periodic orbit of step-to-step (S2S) dynamics based on a reduced-order model (ROM) can achieve stable walking. However, the model discrepancies between the ROM and the full-order dynamic system are often ignored.

Touch-down condition control for the bipedal spring-mass model in walking.

Bioinspir Biomim

December 2024

Dynamic Robotics and Artificial Intelligence Laboratory (DRAIL), Oregon State University, Corvallis, OR, United States of America.

Behaviors of animal bipedal locomotion can be described, in a simplified form, by the bipedal spring-mass model. The model provides predictive power, and helps us understand this complex dynamical behavior. In this paper, we analyzed a range of gaits generated by the bipedal spring-mass model during walking, and proposed a stabilizing touch-down condition for the swing leg.
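In the spring-mass (SLIP) model, a touch-down condition amounts to choosing the leg angle at which the uncompressed spring leg contacts the ground. The geometric helper below is a hypothetical sketch, not the paper's proposed stabilizing condition: it only converts a commanded touch-down angle into the foot position relative to the center of mass at the instant of contact, when the spring is still at rest length.

```python
import math

def touchdown_foot_offset(leg_length, touchdown_angle_rad):
    """Foot position relative to the CoM at touch-down.

    touchdown_angle_rad is measured from the vertical; at contact the
    spring leg is uncompressed, so the geometry is exact.
    Returns (dx, dy): dx ahead of the CoM, dy below it (negative).
    """
    dx = leg_length * math.sin(touchdown_angle_rad)
    dy = -leg_length * math.cos(touchdown_angle_rad)
    return dx, dy
```

A stepping policy built on this model would pick the touch-down angle as a function of the CoM state; the paper's contribution is precisely which angle stabilizes the walking gait.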

In the study of PAM (McKibben-type pneumatic artificial muscle)-driven bipedal robots, it is essential to investigate whether the intrinsic properties of the PAM contribute to achieving stable robot motion. Furthermore, it is crucial to determine if this contribution can be achieved through the interaction between the robot's mechanical structure and the PAM. In previous research, a PAM-driven bipedal musculoskeletal robot was designed based on the principles of the spring-loaded inverted pendulum (SLIP) model.
