Exoskeletons have enormous potential to improve human locomotive performance. However, their development and broad dissemination are limited by the requirement for lengthy human tests and handcrafted control laws. Here we show an experiment-free method to learn a versatile control policy in simulation. Our learning-in-simulation framework leverages dynamics-aware musculoskeletal and exoskeleton models and data-driven reinforcement learning to bridge the gap between simulation and reality without human experiments. The learned controller is deployed on a custom hip exoskeleton that automatically generates assistance across different activities, reducing metabolic rates by 24.3%, 13.1% and 15.4% for walking, running and stair climbing, respectively. Our framework may offer a generalizable and scalable strategy for the rapid development and widespread adoption of a variety of assistive robots for both able-bodied and mobility-impaired individuals.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344585
DOI: http://dx.doi.org/10.1038/s41586-024-07382-4
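
To give a rough sense of the learning-in-simulation idea described in the abstract, the sketch below trains a simple policy that maps simulated gait observations to an assistance torque using a policy-gradient update. It is a minimal, hypothetical illustration only: the toy dynamics, reward, dimensions and hyperparameters are placeholders and do not reflect the paper's musculoskeletal model, exoskeleton model or actual training pipeline.

```python
# Hypothetical sketch: policy-gradient training of an assistance controller
# in a toy simulated environment. All dynamics and reward terms are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM = 6, 1          # e.g. joint angles/velocities -> hip torque
W = 0.01 * rng.standard_normal((ACT_DIM, OBS_DIM))  # linear policy weights

def simulate_episode(weights, steps=200, noise=0.1):
    """Roll out the policy in a toy surrogate of the simulated plant.

    Returns per-step (observation, action, reward) tuples. The reward tracks
    a reference assistance profile, standing in for an objective such as
    reducing metabolic cost."""
    obs = rng.standard_normal(OBS_DIM)
    trajectory = []
    for t in range(steps):
        mean_action = weights @ obs
        action = mean_action + noise * rng.standard_normal(ACT_DIM)
        reference = np.sin(2 * np.pi * t / 100)        # toy gait-phase target
        reward = -float((action[0] - reference) ** 2)  # tracking penalty
        trajectory.append((obs.copy(), action, reward))
        # Toy linear dynamics with actuation coupling (placeholder physics).
        obs = 0.9 * obs + 0.1 * rng.standard_normal(OBS_DIM)
        obs[0] += 0.05 * action[0]
    return trajectory

def reinforce_update(weights, trajectory, lr=1e-3, noise=0.1):
    """One REINFORCE-style gradient step on the linear-Gaussian policy."""
    returns = np.cumsum([r for _, _, r in trajectory][::-1])[::-1]
    baseline = returns.mean()
    grad = np.zeros_like(weights)
    for (obs, action, _), ret in zip(trajectory, returns):
        # d log pi / dW for a Gaussian policy with fixed std `noise`.
        grad += np.outer((action - weights @ obs) / noise**2, obs) * (ret - baseline)
    return weights + lr * grad / len(trajectory)

for epoch in range(200):
    episode = simulate_episode(W)
    W = reinforce_update(W, episode)
    if epoch % 50 == 0:
        mean_reward = np.mean([r for _, _, r in episode])
        print(f"epoch {epoch:3d}  mean reward {mean_reward:+.3f}")
```

The key design point this toy loop shares with the paper's approach is that all training interaction happens in simulation; only the final learned controller would be transferred to hardware.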