A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

PLoS One

Department of Computer Science & Engineering, University of Washington, Seattle, WA, United States of America.

Published: June 2016

A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
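The inference loop the abstract describes, learning a probabilistic model of one's own actions, inverting it with Bayes' rule to infer the goal behind observed human actions, and requesting human assistance when the posterior is too uncertain, can be sketched as a toy model. All goals, actions, and probabilities below are made up for illustration; this is not the authors' implementation.

```python
# Toy Bayesian goal inference: the robot has learned P(action | goal) from
# its own experience; watching a human act, it computes a posterior over
# goals and imitates the most probable one, or asks for help when uncertain.

# Hypothetical learned likelihoods P(action | goal) and a uniform goal prior.
likelihoods = {
    "stack_blocks": {"reach": 0.6, "grasp": 0.3, "lift": 0.1},
    "clear_table":  {"reach": 0.2, "grasp": 0.3, "lift": 0.5},
}
prior = {"stack_blocks": 0.5, "clear_table": 0.5}

def infer_goal(observed_actions, confidence_threshold=0.8):
    """Return (best_goal, posterior); best_goal is None when too uncertain."""
    posterior = dict(prior)
    for action in observed_actions:
        for goal in posterior:
            # Multiply in the likelihood of each observed action under each goal.
            posterior[goal] *= likelihoods[goal].get(action, 1e-6)
    z = sum(posterior.values())
    posterior = {g: p / z for g, p in posterior.items()}
    best = max(posterior, key=posterior.get)
    if posterior[best] < confidence_threshold:
        return None, posterior  # too uncertain: request human assistance
    return best, posterior

goal, post = infer_goal(["lift", "lift"])  # goal == "clear_table"
```

A sequence of ambiguous actions (e.g. a single "grasp", equally likely under both goals) leaves the posterior flat, so the sketch returns `None` and the agent would seek assistance, mirroring the paper's human-robot collaboration scenario.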

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4633237
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0141965

Publication Analysis

Top Keywords

goal-based imitation (16); human actions (16); approach robotic (8); robotic learning (8); approach allows (8); imitation robotic (8); human-robot collaboration (8); robot learns (8); probabilistic model (8); actions (7)

Similar Publications

It is often put forward that in-group members are imitated more strongly than out-group members. However, the validity of this claim has been questioned, as recent investigations were unable to find differences in the imitation of in- versus out-group members. A central characteristic of these failed replications is their exclusive focus on movement-based imitation, which neglects the superordinate goal of the movements.

In past research on imitation, some findings suggest that imitation is goal based, whereas other findings suggest that imitation can also be based on a direct mapping of a model's movements without necessarily adopting the model's goal. We argue that the 2 forms of imitation are flexibly deployed in accordance with the psychological distance from the model. We specifically hypothesize that individuals are relatively more likely to imitate the model's goals when s/he is distant but relatively more likely to imitate the model's specific movements when s/he is proximal.


Muecas: a multi-sensor robotic head for affective human robot interaction and imitation.

Sensors (Basel)

April 2014

RoboLab, Robotics and Artificial Vision Laboratory, University of Extremadura, Escuela Politécnica, Avenida de la Universidad s/n, Cáceres, Spain.

This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions.

Infants need to analyze human behavior in terms of goal-directed actions in order to form expectations about agents' rationality. There is converging evidence for goal encoding during the first year of life from looking time as well as social learning paradigms using imitation procedures. However, conceptual interpretations of these abilities are challenged by low-level motor resonance accounts that propose task-specific lower level sensorimotor associations underlying looking time tasks rather than abstract conceptual knowledge.
