A bio-inspired control architecture for a robotic hand is presented, mimicking the way humans learn inverse internal models to interact more effectively with the environment.
It uses Locally Weighted Projection Regression (LWPR) networks for efficient online learning, allowing the hand to update its internal representation as it experiences different interactions and forces.
The architecture is tested on a simulated finger, which performs closing movements effectively under varying viscous force fields.
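The summary does not specify the controller structure or the LWPR configuration, so the following is only a minimal sketch of how online inverse-model learning of this kind could be set up for a one-joint simulated finger in a viscous force field. It assumes the standard LWPR Python bindings (constructor `LWPR(n_in, n_out)`, `update()`, `predict()`); the plant parameters, gains, and the direct inverse-modeling training signal are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from lwpr import LWPR   # Python bindings of the LWPR library (assumed to be installed)

# --- Hypothetical one-joint finger model (illustrative values, not from the paper) ---
I_link = 1e-3    # link inertia about the joint [kg m^2]
b_field = 0.05   # viscous force-field coefficient [N m s / rad]
dt = 1e-3        # integration step [s]

def finger(q, dq, tau):
    """Euler step of the simulated finger joint under a viscous force field."""
    ddq = (tau - b_field * dq) / I_link
    dq += ddq * dt
    q += dq * dt
    return q, dq, ddq

# --- Inverse dynamics model learned online with LWPR: (q, dq, ddq) -> torque ---
inv_model = LWPR(3, 1)
inv_model.init_D = 25.0 * np.eye(3)   # initial receptive-field width (assumed setting)

q, dq = 0.0, 0.0
q_goal = 1.2                          # flexed ("closed") joint angle [rad]
kp_a, kd_a = 100.0, 20.0              # gains shaping the desired closing acceleration
kp_t, kd_t = 0.5, 0.05                # torque feedback gains used while the model is inaccurate

for _ in range(5000):
    # Desired acceleration driving the finger toward closure.
    ddq_des = kp_a * (q_goal - q) - kd_a * dq

    # Feedforward torque from the learned inverse model plus a small feedback correction.
    tau_ff = inv_model.predict(np.array([q, dq, ddq_des]))[0]
    tau_fb = kp_t * (q_goal - q) - kd_t * dq
    tau = tau_ff + tau_fb

    q, dq, ddq = finger(q, dq, tau)

    # Online update: the applied torque is a true sample of the inverse dynamics
    # at the state/acceleration actually experienced, force field included.
    inv_model.update(np.array([q, dq, ddq]), np.array([tau]))
```

As the inverse model improves, the feedforward term takes over from the feedback correction, so changing `b_field` mid-run illustrates how incremental LWPR updates can adapt the internal representation to a new force field.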