Introduction: Recent advancements in reinforcement learning algorithms have accelerated the development of control models with high-dimensional inputs and outputs that can reproduce human movement. However, the produced motion tends to be less human-like if algorithms do not involve a biomechanical human model that accounts for skeletal and muscle-tendon properties and geometry. In this study, we integrated a reinforcement learning algorithm with a musculoskeletal model, including trunk, pelvis, and leg segments, to develop control models that drive the model to walk.
Methods: We first simulated human walking without imposing a target walking speed, allowing the model to settle on a stable speed of its own, which was 1.45 m/s. A range of other speeds was then imposed based on this self-selected walking speed. All simulations were generated by solving the Markov decision process with the covariance matrix adaptation evolution strategy (CMA-ES), without any reference motion data.
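The optimization loop described above can be sketched in miniature. The paper uses full CMA-ES on a musculoskeletal simulation; the snippet below substitutes a simplified (mu, lambda) evolution strategy and a synthetic quadratic "return" in place of the walking environment, purely to illustrate the derivative-free, reference-free search over policy parameters. All names (`episodic_return`, `theta_star`) are illustrative assumptions, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([0.5, -1.2, 2.0])  # hypothetical optimal parameters

def episodic_return(theta):
    # Placeholder for rolling out the controller in the MDP and
    # accumulating reward; here a synthetic quadratic objective.
    return -np.sum((theta - theta_star) ** 2)

theta = np.zeros(3)  # mean of the search distribution
sigma = 1.0          # global step size
lam, mu = 16, 4      # population size and number of parents

for gen in range(200):
    # Sample candidate policy parameter vectors around the current mean
    pop = theta + sigma * rng.standard_normal((lam, theta.size))
    returns = np.array([episodic_return(p) for p in pop])
    # Rank-based selection: recombine the best mu candidates
    parents = pop[np.argsort(returns)[::-1][:mu]]
    theta = parents.mean(axis=0)
    sigma *= 0.97    # crude step-size decay (full CMA-ES adapts both
                     # the step size and the covariance matrix)

print(np.round(theta, 2))
```

Unlike gradient-based reinforcement learning, this style of search needs only scalar episodic returns, which is why no reference motion data are required.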
Results: Simulated hip and knee kinematics agreed well with those in experimental observations, but ankle kinematics were less well-predicted.
Discussion: Finally, we demonstrated that our reinforcement learning framework also has the potential to model and predict pathological gait resulting from muscle weakness.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10601656
DOI: http://dx.doi.org/10.3389/fnbot.2023.1244417