AROS: Affordance Recognition with One-Shot Human Stances.

Frontiers in Robotics and AI (Front Robot AI)

Visual Information Lab, Department of Computer Science, University of Bristol, Bristol, United Kingdom.

Published: May 2023

We present Affordance Recognition with One-Shot Human Stances (AROS), a one-shot learning approach that uses an explicit representation of interactions between highly articulated human poses and 3D scenes. The approach is one-shot in that it requires no iterative training or retraining to add new affordance instances; only one or a small handful of examples of the target pose are needed to describe the interactions. Given a 3D mesh of a previously unseen scene, we can predict affordance locations that support the interactions and generate corresponding articulated 3D human bodies around them. We evaluate the performance of our approach on three public datasets of scanned real environments with varying degrees of noise. Through rigorous statistical analysis of crowdsourced evaluations, our results show that our one-shot approach is preferred up to 80% of the time over data-intensive baselines.
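To make the matching idea concrete, below is a minimal Python sketch of one-shot affordance localization against a scene mesh. It is not the authors' AROS implementation: the use of trimesh/numpy, the clearance-only interaction descriptor, and the translation-only placement search are all simplifying assumptions for illustration.

# Illustrative sketch only, NOT the AROS code. Assumes `trimesh` and `numpy`.
# The descriptor here is just per-keypoint clearance to the scene surface;
# AROS uses a richer explicit interaction representation.
import numpy as np
import trimesh
from trimesh.proximity import ProximityQuery

def interaction_descriptor(body_points, scene_mesh):
    # Signed distance from each body keypoint to the nearest scene surface
    # (trimesh convention: positive inside the mesh, negative outside).
    return ProximityQuery(scene_mesh).signed_distance(body_points)

def score_placements(exemplar_desc, body_points, scene_mesh, n_candidates=200):
    # Sample candidate anchor points on the scene surface, translate the
    # posed body to each one, and rank candidates by how closely their
    # descriptor matches the single exemplar (lower score = better match).
    anchors, _ = trimesh.sample.sample_surface(scene_mesh, n_candidates)
    body_centered = body_points - body_points.mean(axis=0)
    scores = []
    for anchor in anchors:
        placed = body_centered + anchor  # translation-only placement
        desc = interaction_descriptor(placed, scene_mesh)
        scores.append(np.linalg.norm(desc - exemplar_desc))
    order = np.argsort(scores)
    return anchors[order], np.asarray(scores)[order]

In this simplified view, the single exemplar descriptor (computed once from the one provided pose-scene interaction) plays the role of the one-shot template, and unseen scenes are ranked without any retraining; a full system would also search over orientation and use a far richer interaction representation.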


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10185755
DOI: http://dx.doi.org/10.3389/frobt.2023.1076780

Publication Analysis

Top Keywords

affordance recognition (8)
recognition one-shot (8)
one-shot human (8)
human stances (8)
articulated human (8)
one-shot (5)
aros affordance (4)
human (4)
stances affordance (4)
stances aros (4)
