A prevailing question in sensorimotor research is how sensory signals are integrated with abstract behavioral rules (contexts) to produce decisions about motor actions. We used neural network models to study how context-specific visuomotor remapping may depend on the functional connectivity among multiple layers. Networks were trained to perform different rotational visuomotor associations depending on the stimulus color (a nonspatial context signal). In network I, the context signal was propagated forward through the network (bottom-up), whereas in network II, it was propagated backward (top-down). During the presentation of the visual cue stimulus, both networks integrated the context with the sensory information via a mechanism similar to the classic gain field. The recurrence in the networks' hidden layers allowed us to simulate this multimodal integration over time. Network I learned to perform the proper visuomotor transformations based on a context-modulated memory of the visual cue in its hidden-layer activity. In network II, a brief visual response driven by the sensory input was quickly replaced by a context-modulated motor-goal representation in the hidden layer. This occurred because of a dominant feedback signal from the output layer, which first conveyed context information and then, after the disappearance of the visual cue, conveyed motor-goal information. We also show that the origin of the context information is not necessarily closely tied to the top-down feedback. However, we suggest that the predominance of motor-goal representations found in the parietal cortex during context-specific movement planning might be a consequence of strong top-down feedback originating from within the parietal lobe or from the frontal lobe.
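As a rough sketch of the two architectures contrasted above (this is not the trained networks from the study; the layer sizes, random weights, the explicitly multiplicative gain-field-style term, and all function names below are assumptions introduced for illustration), the snippet contrasts a bottom-up context route, where the color context feeds the recurrent hidden layer directly, with a top-down route, where the context enters only at the output layer and reaches the hidden layer through the feedback projection. Once the visual cue is switched off, the hidden state of the top-down variant is driven mainly by the output feedback, loosely mirroring the feedback-dominated dynamics described in the abstract.

```python
import numpy as np

# Hypothetical layer sizes and random (untrained) weights, for illustration only.
rng = np.random.default_rng(0)
N_VIS, N_CTX, N_HID, N_OUT = 16, 2, 32, 8          # visual input, color context, hidden, motor output

W_vis = rng.normal(scale=0.1, size=(N_HID, N_VIS))      # visual input -> hidden
W_ctx = rng.normal(scale=0.1, size=(N_HID, N_CTX))      # context -> hidden (network I, bottom-up)
W_rec = rng.normal(scale=0.1, size=(N_HID, N_HID))      # recurrent hidden -> hidden
W_out = rng.normal(scale=0.1, size=(N_OUT, N_HID))      # hidden -> output
W_ctx_out = rng.normal(scale=0.1, size=(N_OUT, N_CTX))  # context -> output (network II, top-down)
W_fb = rng.normal(scale=0.1, size=(N_HID, N_OUT))       # output -> hidden feedback (network II)

def step_network_I(h, vis, ctx):
    """Bottom-up route: the context reaches the hidden layer directly and,
    via the multiplicative (gain-field-like) term, modulates the visual drive."""
    drive = (W_vis @ vis) * (1.0 + W_ctx @ ctx) + W_rec @ h
    return np.tanh(drive)

def step_network_II(h, out_prev, vis, ctx):
    """Top-down route: the context enters at the output layer; the hidden layer
    sees it only through the feedback projection from the output."""
    h_new = np.tanh(W_vis @ vis + W_rec @ h + W_fb @ out_prev)
    out_new = np.tanh(W_out @ h_new + W_ctx_out @ ctx)
    return h_new, out_new

# One trial: a brief visual cue, then a memory/planning period with no visual input.
vis_cue = rng.normal(size=N_VIS)
ctx = np.array([1.0, 0.0])                        # e.g. "red" context selects one rotation rule
h1 = np.zeros(N_HID)
h2, out2 = np.zeros(N_HID), np.zeros(N_OUT)
for t in range(10):
    vis = vis_cue if t < 3 else np.zeros(N_VIS)   # cue visible only during the first steps
    h1 = step_network_I(h1, vis, ctx)
    h2, out2 = step_network_II(h2, out2, vis, ctx)
```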
Full-text sources:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6673148
- DOI: http://dx.doi.org/10.1523/JNEUROSCI.2685-07.2007
Cognition, January 2025. Institute of Systems and Information Engineering, University of Tsukuba, Ibaraki 305-8573, Japan.
Pain perception is not determined solely by noxious stimuli; it also varies with other factors, such as beliefs about pain and its uncertainty. A widely accepted theory posits that the brain integrates predictions of pain with noxious stimuli to estimate pain intensity. This theory assumes that the estimated pain value is adjusted to minimize surprise, mathematically defined as the error between predictions and outcomes.
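Under Gaussian assumptions, this kind of surprise-minimizing integration reduces to a precision-weighted average of the predicted and stimulus-driven pain. The sketch below is a generic illustration of that idea only; the variable names and numbers are assumptions and are not taken from the study.

```python
# Hypothetical precision-weighted integration of predicted pain and noxious input
# (illustrative values only; not the model or data from the study).
mu_prior, prec_prior = 6.0, 2.0   # predicted pain and its precision (1/variance)
s_stim,   prec_stim  = 3.0, 1.0   # pain implied by the noxious stimulus and its precision

# The estimate that minimizes the precision-weighted squared prediction errors:
pain_estimate = (prec_prior * mu_prior + prec_stim * s_stim) / (prec_prior + prec_stim)
# -> 5.0: pulled toward the prediction because the prediction is more precise.
```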
The role of the cerebellum in controlling eye movements is well established, but its contribution to more complex forms of visual behavior has remained elusive. To study cerebellar activity during visual attention, we recorded extracellular activity of dentate nucleus (DN) neurons in two non-human primates (NHPs). The NHPs were trained to read the direction indicated by a peripheral visual stimulus while maintaining fixation at the center, and to report the direction of the cue by making a saccadic eye movement in the same direction after a delay.
J Cogn, January 2025. Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, The Netherlands.
Although pictures are static representations, they use various cues to suggest dynamic motion. To investigate how effectively different motion cues convey speed in static images, we conducted three experiments. In Experiment 1, we compared subjective speed ratings for motion lines trailing behind movers, suppletion lines replacing parts of the movers, and backfixing lines set in the background, against a baseline with no extra cue.
J Neurosci, January 2025. Department of Psychology, 450 Jane Stanford Way, Stanford University, Stanford, CA, USA.
Immaturities exist at multiple levels of the developing human visual pathway, beginning with photon efficiency and spatial sampling in the retina and continuing through early and later stages of cortical processing. Here we use Steady-State Visual Evoked Potentials (SSVEPs) and controlled visual stimuli to determine the degree to which sensitivity to horizontal retinal disparity is limited by the visibility of the monocular half-images, by the ability to encode absolute disparity, or by the ability to encode relative disparity. Responses were recorded from male and female human participants at average ages of 5.
PLoS One, January 2025. Graduate School of Humanities and Social Sciences, Kyoto University of Advanced Science, Kyoto, Japan.
The joint Simon effect refers to the inhibition of responses to spatially competing stimuli during a complementary task. This effect has been thought to depend on social factors related to the partner, namely sharing a stimulus-action representation. According to this account, virtual interaction through avatars should produce the joint Simon effect even when the partner is not physically present in the same space, because avatars are intentional agents.