Infants' looking behaviors are often used to measure attention, real-time processing, and learning, frequently from low-resolution videos. Despite the ubiquity of gaze-related methods in developmental science, current analysis techniques usually involve laborious post hoc coding, imprecise real-time coding, or expensive eye trackers that may increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low-resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We compared our method, called iCatcher, to manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open-source repository at https://github.com/yoterel/iCatcher.
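To make the core idea concrete, the sketch below shows a per-frame gaze-direction classifier in the spirit of the abstract: a small convolutional network that maps a low-resolution face crop to one of a few discrete gaze classes. The architecture, input size, and the left/right/away label set are illustrative assumptions for a two-picture display, not iCatcher's actual implementation (see the linked repository for that).

```python
# Minimal sketch of a per-frame gaze-direction classifier (PyTorch).
# All design choices here are assumptions for illustration, not iCatcher's code.
import torch
import torch.nn as nn

GAZE_CLASSES = ["left", "right", "away"]  # assumed label set for a two-picture screen


class GazeClassifier(nn.Module):
    """Small CNN mapping a low-resolution face crop to a gaze-direction class."""

    def __init__(self, num_classes: int = len(GAZE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to a 64-d descriptor
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) face crops; returns per-class logits.
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    model = GazeClassifier().eval()
    frame_crop = torch.rand(1, 3, 64, 64)  # stand-in for one 64x64 face crop from a video frame
    with torch.no_grad():
        logits = model(frame_crop)
    print(GAZE_CLASSES[logits.argmax(dim=1).item()])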

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9320879
DOI: http://dx.doi.org/10.1111/infa.12468

