The eye movements we make to look at objects require that the spatial information contained in the object's image on the retina be used to generate a motor command. This process is known as sensorimotor transformation and has generally been studied using simple point targets. Here, we investigate the sensorimotor transformation involved in planning double saccade sequences directed at one or two objects. Using visually guided saccades toward stationary objects, visually guided saccades toward objects subjected to intrasaccadic displacements, and memory-guided saccades, we found that the coordinate transformations required to program the second saccade differed between saccades aimed at a new target object and saccades that scanned the same object. While saccades aimed at a new object were updated on the basis of the actual eye position, those that scanned the same object were performed with a fixed amplitude, irrespective of the actual eye position. Our findings demonstrate that different abstract representations of space are used in sensory-to-motor transformations, depending on the action planned on the objects.
DOI: http://dx.doi.org/10.1007/s00221-005-2308-8