MoCA is a bi-modal dataset in which we collect Motion Capture data and video sequences, acquired from multiple views including an ego-like viewpoint, of upper-body actions in a cooking scenario. It has been collected with the specific purpose of investigating view-invariant action properties in both biological and artificial systems. Beyond that, it represents an ideal test bed for research in a number of fields, including cognitive science and artificial vision, and for application domains such as motor control and robotics. Compared to other available benchmarks, MoCA offers a unique compromise for research communities relying on very different approaches to data gathering: from action recognition in the wild, now standard practice in computer vision and machine learning, to motion analysis in highly controlled scenarios, as in motor control studies for biomedical applications. In this work we introduce the dataset and its peculiarities, and discuss a baseline analysis as well as examples of applications for which the dataset is well suited.
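To make the bi-modal structure concrete, below is a minimal sketch of how the motion capture stream and the multi-view video streams of a single action might be paired in a local copy of the dataset. All names used here (the MoCA root folder, the mocap/ and video/ subfolders, the "mixing" action label, the view labels, and the CSV layout) are assumptions made for illustration only, not the dataset's documented file organization.

```python
# Hypothetical sketch: pairing the two MoCA modalities for one action.
# Folder layout, file names, view labels, and CSV format are assumptions.
from pathlib import Path

import numpy as np

DATA_ROOT = Path("MoCA")                      # assumed local copy of the dataset
ACTION = "mixing"                             # assumed action label
VIEWS = ["lateral", "frontal", "ego"]         # multiple views, incl. an ego-like one

# Motion capture stream: assume one CSV per action,
# rows = time samples, columns = marker coordinates.
mocap_path = DATA_ROOT / "mocap" / f"{ACTION}.csv"
if mocap_path.exists():
    mocap = np.loadtxt(mocap_path, delimiter=",")
    print(f"MoCap: {mocap.shape[0]} samples x {mocap.shape[1]} channels")

# Video streams: assume one video file per view for the same action.
video_paths = {view: DATA_ROOT / "video" / view / f"{ACTION}.mp4" for view in VIEWS}
for view, path in video_paths.items():
    status = "found" if path.exists() else "missing"
    print(f"{view:8s} -> {path} ({status})")
```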

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7738546
DOI: http://dx.doi.org/10.1038/s41597-020-00776-9
