How we first learn to interact with and understand our environment is an age-old philosophical question. Scientists have long sought to understand the origin of egocentric spatial localization and the perceptual integration of touch and vision. The beginnings of intermodal visual-motor and visual-tactile linkages are difficult to study in early infancy because infants lack the muscular strength and control to guide visual-motor behavior accurately and cannot concentrate well [1-6]. Alternatively, one can examine young children in whom a congenitally absent sensory modality has been restored. They are the best substitute for infants if they are old enough for good muscle control yet young enough to fall within the classic critical period for neuroplasticity [7, 8]. Recovery studies after removal of dense congenital cataracts are examples of this approach, but most have been performed on older subjects [9-14]. We report here the results of video-recorded experiments on a congenitally blind child, beginning immediately after surgical restoration of vision. Her remarkably rapid development of accurate reaching and grasping showed that egocentric spatial localization depends on neural circuitry that can be calibrated with less than half an hour of spatially informative experience. Thirty-two hours after first sight, she visually recognized an object that she had simultaneously looked at and held, even though she could not perform this recognition using a single sense alone (vision to vision; touch to touch) until the following day. She then also performed intersensory transfer of tactile object experience to visual object recognition, demonstrating that the two senses are prearranged to become calibrated to one another immediately.
DOI: http://dx.doi.org/10.1016/j.cub.2016.02.065