This paper presents a bio-inspired control architecture for a robotic hand. It relies on the mechanisms of inverse internal model learning studied in humans. The controller develops an internal representation of the hand interacting with the environment and updates it using the interaction forces that arise during contact. The learning paradigm exploits LWPR (Locally Weighted Projection Regression) networks, which allow efficient incremental online learning through spatially localized linear regression models and limit negative interference when learning multiple tasks. The architecture is validated on a simulated finger of the DLR-HIT-Hand II performing closing movements in the presence of two different viscous force fields that perturb its motion.
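The abstract does not include implementation details, so the following is only a minimal sketch of the general idea behind LWPR-style learning: predictions are formed as an activation-weighted combination of local linear models, and each incoming sample updates only the receptive fields that are active near it, which is what limits interference between tasks. All class names, thresholds, and the gradient-based local update below are illustrative assumptions, not the authors' implementation or the actual LWPR library API.

```python
import numpy as np

class LocalModel:
    """One receptive field: a Gaussian activation plus a local linear map (illustrative)."""
    def __init__(self, center, dim_in, dim_out, distance=50.0):
        self.c = np.asarray(center, dtype=float)
        self.D = distance * np.eye(dim_in)            # distance metric (kept fixed in this sketch)
        self.beta = np.zeros((dim_in + 1, dim_out))   # local linear coefficients, with bias term

    def activation(self, x):
        d = x - self.c
        return np.exp(-0.5 * d @ self.D @ d)          # Gaussian receptive-field activation

    def predict(self, x):
        return np.append(x, 1.0) @ self.beta

    def update(self, x, y, w, lr=0.1):
        # Activation-weighted gradient step toward the target (LWPR uses partial least squares instead)
        xb = np.append(x, 1.0)
        err = y - xb @ self.beta
        self.beta += lr * w * np.outer(xb, err)


class LocalizedRegressor:
    """Minimal sketch of receptive-field weighted regression, in the spirit of LWPR."""
    def __init__(self, dim_in, dim_out, act_threshold=0.3):
        self.dim_in, self.dim_out = dim_in, dim_out
        self.act_threshold = act_threshold
        self.models = []

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        ws = np.array([m.activation(x) for m in self.models])
        if ws.sum() < 1e-10:
            return np.zeros(self.dim_out)
        ys = np.array([m.predict(x) for m in self.models])
        return ws @ ys / ws.sum()                     # weighted average of local predictions

    def update(self, x, y):
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        ws = [m.activation(x) for m in self.models]
        if not ws or max(ws) < self.act_threshold:    # no model covers this input: add a receptive field
            self.models.append(LocalModel(x, self.dim_in, self.dim_out))
            ws = [m.activation(x) for m in self.models]
        for m, w in zip(self.models, ws):
            if w > 1e-3:                              # only nearby models are updated, limiting interference
                m.update(x, y, w)
```

As a hypothetical usage matching the paper's setting, such a regressor could be trained online as an inverse model, mapping the finger's desired state (e.g. joint position and velocity) to the torque that was actually needed under the current force field, and queried at control time to feed-forward that torque.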
DOI: http://dx.doi.org/10.1109/IEMBS.2010.5627411