Mental representations of the environment in infants are sparse and grow richer during their development. Anticipatory eye fixation studies show that infants aged around 7 months start to predict the goal of an observed action, e.g., an object targeted by a reaching hand. Interestingly, goal-predictive gaze shifts occur at an earlier age when the hand subsequently manipulates the object, and at a later age when the action is performed by an inanimate actor, e.g., a mechanical claw. We introduce CAPRI2 (Cognitive Action PRediction and Inference in Infants), a computational model that explains this development from a functional, algorithmic perspective. It is based on the theory that infants learn object files and events as they develop a physical reasoning system. In particular, CAPRI2 learns a generative event-predictive model, which it uses both to interpret sensory information and to infer goal-directed behavior. When observing object interactions, CAPRI2 (i) interprets the unfolding interactions in terms of event-segmented dynamics, (ii) maximizes the coherence of its event interpretations, updating its internal estimates, and (iii) chooses gaze behavior to minimize expected uncertainty. As a result, CAPRI2 mimics the developmental pathway of infants' goal-predictive gaze behavior. Our modeling work suggests that the involved event-predictive representations, longer-term generative model learning, and shorter-term retrospective and active inference principles constitute fundamental building blocks for the effective development of goal-predictive capacities.
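The three-step loop in the abstract can be illustrated with a minimal sketch. This is not the authors' CAPRI2 implementation; all names, likelihood values, and the two-interpretation setup below are our own illustrative assumptions. It shows only the general pattern: maintain a belief over event interpretations, update it Bayesianly from observations (retrospective inference), and pick the fixation target whose observation is expected to leave the least posterior uncertainty (active inference).

```python
import numpy as np

# Illustrative sketch only: structure and numbers are assumptions,
# not the published CAPRI2 model.

def entropy(p):
    """Shannon entropy of a discrete belief over event interpretations."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def bayes_update(prior, likelihood):
    """Retrospective inference: reweight event beliefs by an observation likelihood."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def expected_uncertainty(belief, outcome_likelihoods, outcome_probs):
    """Expected posterior entropy after fixating a location with this outcome model."""
    return sum(po * entropy(bayes_update(belief, lik))
               for po, lik in zip(outcome_probs, outcome_likelihoods))

# Belief over two hypothetical event interpretations:
# "reach-to-grasp" vs. "non-goal-directed motion".
belief = np.array([0.5, 0.5])

# Two candidate gaze targets (e.g., the hand vs. the goal object), each with a
# crude, assumed model of what observing there would reveal about the event.
gaze_options = {
    "hand":   ([np.array([0.90, 0.4]), np.array([0.10, 0.6])], [0.5, 0.5]),
    "object": ([np.array([0.95, 0.2]), np.array([0.05, 0.8])], [0.5, 0.5]),
}

# Active-inference step: fixate where expected uncertainty is minimal.
best = min(gaze_options,
           key=lambda g: expected_uncertainty(belief, *gaze_options[g]))
print(best)  # → object
```

Under these toy numbers, observing the goal object is more informative than observing the hand, so the model shifts its gaze toward the goal, which is the qualitative signature of goal-predictive fixation behavior.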
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11500850 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0312532 | PLOS |