Short-term plasticity of the visuomotor map during grasping movements in humans.

Learn Mem

Physiology Section, Department of Integrative Medical Biology, Umeå University, S-901 87 Umeå, Sweden.

Published: April 2005

During visually guided grasping movements, visual information is transformed into motor commands. This transformation is known as the "visuomotor map." To investigate limitations in the short-term plasticity of the visuomotor map in normal humans, we studied the maximum grip aperture (MGA) during the reaching phase while subjects grasped objects of various sizes. The objects seen and the objects grasped were physically never the same. When a discrepancy had been introduced between the sizes of the visual and the grasped objects, and the subjects were fully adapted to it, they all readily interpolated and extrapolated the MGA to objects not included in the training trials. In contrast, when the subjects were exposed to discrepancies that required a slope change in the visuomotor map, they were unable to adapt adequately. They instead retained a subject-specific slope of the relationship between visual size and MGA. We conclude from these results that during reaching for grasping, normal subjects are unable to abandon a linear function relating visual object size to MGA. Moreover, the plasticity of the visuomotor map is, at least in the short term, constrained to allow only offset changes; that is, only "rigid shifts" are possible between the visual and motor coordinate systems.
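The constraint described above can be illustrated with a small sketch (not from the paper; all numbers and function names are hypothetical): the visuomotor map is modeled as a linear function MGA = slope x visual_size + offset, and short-term adaptation is allowed to change only the offset, producing a "rigid shift" that generalizes to untrained object sizes.

```python
# Illustrative model only: a linear visuomotor map whose short-term
# adaptation is restricted to offset ("rigid shift") changes.

def mga(visual_size, slope, offset):
    """Predicted maximum grip aperture (mm) for a given visual size (mm)."""
    return slope * visual_size + offset

def adapt_offset(slope, offset, training):
    """Offset-only adaptation: shift the map by the mean residual error
    between required apertures and current predictions. The slope is
    deliberately left unchanged, mirroring the reported constraint."""
    errors = [required - mga(size, slope, offset) for size, required in training]
    return offset + sum(errors) / len(errors)

# Hypothetical baseline map, then training trials with a uniform +10 mm
# discrepancy between the seen and the grasped object sizes.
slope, offset = 0.8, 20.0
training = [(s, mga(s, slope, offset) + 10.0) for s in (30, 50, 70)]
offset = adapt_offset(slope, offset, training)

# The adapted map carries the +10 mm shift over to untrained sizes
# (interpolation at 40 mm, extrapolation at 90 mm); a discrepancy
# requiring a slope change could not be absorbed by this mechanism.
print(mga(40, slope, offset))   # → 62.0
print(mga(90, slope, offset))   # → 102.0
```

Because adaptation adds a single constant, the shift applies uniformly across the whole size range, which is why offset-adapted subjects interpolate and extrapolate readily while slope discrepancies defeat them.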


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC548498
DOI: http://dx.doi.org/10.1101/lm.83005

Publication Analysis

Top Keywords (frequency): visuomotor map (16); plasticity visuomotor (12); short-term plasticity (8); grasping movements (8); grasped objects (8); relationship visual (8); size mga (8); visual (5); objects (5); visuomotor (4)

Similar Publications

A collicular map for touch-guided tongue control.

Nature

January 2025

Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, USA.

Accurate goal-directed behaviour requires the sense of touch to be integrated with information about body position and ongoing motion. Behaviours such as chewing, swallowing and speech critically depend on precise tactile events on a rapidly moving tongue, but neural circuits for dynamic touch-guided tongue control are unknown. Here, using high-speed videography, we examined three-dimensional lingual kinematics as mice drank from a water spout that unexpectedly changed position during licking, requiring re-aiming in response to subtle contact events on the left, centre or right surface of the tongue.


The fundamental prerequisite for embodied agents to make intelligent decisions lies in autonomous cognition. Typically, agents optimize decision-making by leveraging extensive spatiotemporal information from episodic memory. Concurrently, they utilize long-term experience for task reasoning and foster conscious behavioral tendencies.


Autism spectrum disorder (ASD) presents a range of challenges, including heightened sensory sensitivities. Here, we examine the idea that sensory overload in ASD may be linked to issues with efference copy mechanisms, which predict the sensory outcomes of self-generated actions, such as eye movements. Efference copies play a vital role in maintaining visual and motor stability.


The Visual Systems of Zebrafish.

Annu Rev Neurosci

August 2024

Department of Anatomy and Physiology, School of Biomedical Sciences, The University of Melbourne, Parkville, Victoria, Australia.

The zebrafish visual system has become a paradigmatic preparation for behavioral and systems neuroscience. Around 40 types of retinal ganglion cells (RGCs) serve as matched filters for stimulus features, including light, optic flow, prey, and objects on a collision course. RGCs distribute their signals via axon collaterals to 12 retinorecipient areas in forebrain and midbrain.


We propose a machine-learning approach to construct reduced-order models (ROMs) that predict the long-term out-of-sample dynamics of brain activity (and, in general, of high-dimensional time series), focusing mainly on task-dependent high-dimensional fMRI time series. Our approach proceeds in three stages. First, we exploit manifold learning, in particular diffusion maps (DMs), to discover a set of variables that parametrize the latent space on which the emergent high-dimensional fMRI time series evolve.

