Cortical areas that receive direct sensory input from the thalamus were long thought to be exclusively dedicated to a single modality, giving rise to separate labeled lines. In the past decade, however, several independent lines of research have demonstrated cross-modal responses in primary sensory areas. To investigate whether these responses represent behaviorally relevant information, we carried out neuronal recordings in the primary somatosensory cortex (S1) and primary visual cortex (V1) of rats as they performed whisker-based tasks in the dark. During free exploration of novel objects, V1 and S1 responses carried comparable amounts of information about object identity. During execution of an aperture tactile discrimination task, tactile recruitment was slower and less robust in V1 than in S1. However, V1 tactile responses correlated significantly with performance across sessions. Altogether, the results support the notion that primary sensory areas preferentially process a given modality but can engage in meaningful cross-modal processing, depending on task demands.
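The abstract does not describe the analysis pipeline, but a standard way to quantify how much information population responses carry about object identity is cross-validated decoding of spike counts, with one decoder per area. The sketch below illustrates that general approach on simulated data; the array names, signal-to-noise values, and decoder choice are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: cross-validated decoding of object identity from spike
# counts, one decoder per area. Simulated data; not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_objects = 200, 30, 4

labels = rng.integers(0, n_objects, n_trials)      # object explored on each trial
tuning = rng.normal(0, 1, (n_objects, n_neurons))  # per-object firing preferences

def simulate_counts(snr):
    """Poisson spike counts whose mean rates depend on object identity."""
    rates = np.exp(1.0 + snr * tuning[labels])
    return rng.poisson(rates)

s1_counts = simulate_counts(snr=0.5)   # assumed stronger tactile signal in S1
v1_counts = simulate_counts(snr=0.3)   # assumed weaker cross-modal signal in V1

for name, counts in [("S1", s1_counts), ("V1", v1_counts)]:
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, counts, labels, cv=5).mean()
    print(f"{name} decoding accuracy: {acc:.2f} (chance = {1/n_objects:.2f})")
```

Decoding accuracy above chance indicates that the recorded population carries information about object identity; comparing accuracies across areas gives one operational reading of "comparable amounts of information."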


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3174625
DOI: http://dx.doi.org/10.1073/pnas.1102780108


Similar Publications

Overview and Prospects of DNA Sequence Visualization.

Int J Mol Sci

January 2025

School of Mathematics and Computer Science, Gannan Normal University, Ganzhou 341000, China.

Due to advances in big data technology, deep learning, and knowledge engineering, biological sequence visualization has been extensively explored. In the post-genome era, biological sequence visualization enables the visual representation of both structured and unstructured biological sequence data. However, a universal visualization method for all types of sequences has not been reported.
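As one concrete example of the kind of method such an overview surveys, the Chaos Game Representation (CGR) is a classic technique that maps a DNA sequence onto a 2-D fractal by repeatedly moving halfway toward the corner assigned to each base. The sketch below runs CGR on a random stand-in sequence; it is illustrative only and not necessarily a method discussed in this particular article.

```python
# Chaos Game Representation (CGR): visualize a DNA sequence as a 2-D
# point cloud; each base pulls the current point halfway toward its
# assigned square corner. Illustrative only.
import numpy as np
import matplotlib.pyplot as plt

CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr_points(seq):
    x, y = 0.5, 0.5                      # start at the square's center
    pts = []
    for base in seq.upper():
        if base not in CORNERS:
            continue                     # skip ambiguous bases such as N
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        pts.append((x, y))
    return np.array(pts)

rng = np.random.default_rng(1)
seq = "".join(rng.choice(list("ACGT"), 5000))   # stand-in for a real sequence
pts = cgr_points(seq)
plt.scatter(pts[:, 0], pts[:, 1], s=0.5)
plt.title("CGR of a random 5 kb sequence")
plt.savefig("cgr.png", dpi=150)
```

A random sequence fills the square uniformly; real genomes produce characteristic fractal patterns reflecting their k-mer composition, which is what makes CGR useful as a visualization.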


Physiological Responses to Aversive and Non-aversive Audiovisual, Audio, and Visual Stimuli.

Biol Psychol

January 2025

Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA.

We examined differences in physiological responses to aversive and non-aversive naturalistic audiovisual stimuli and their auditory and visual components within the same experiment. We recorded five physiological measures that have been shown to be sensitive to affect: electrocardiogram, electromyography (EMG) for zygomaticus major and corrugator supercilii muscles, electrodermal activity (EDA), and skin temperature. Valence and arousal ratings confirmed that aversive stimuli were more negative in valence and higher in arousal than non-aversive stimuli.


Audiovisual associative memory and audiovisual integration involve common behavioral processing components and significantly overlap in their neural mechanisms. This suggests that training on audiovisual associative memory may have the potential to improve audiovisual integration. The current study tested this hypothesis by applying a 2 (group: audiovisual training group, unimodal control group) × 2 (time: pretest, posttest) design.
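For readers unfamiliar with the notation, a 2 × 2 mixed design with two time points lets the group-by-time interaction (the training effect) be tested by comparing pretest-to-posttest gain scores between groups. The sketch below demonstrates this on simulated data; the sample size and score distributions are invented for illustration, not taken from the study.

```python
# Minimal sketch of the 2 (group) x 2 (time) mixed design: with only two
# time points, the group-by-time interaction reduces to comparing
# pretest-to-posttest gain scores between groups. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30                                           # participants per group
pre_av = rng.normal(0.60, 0.10, n)               # audiovisual training group
post_av = pre_av + rng.normal(0.08, 0.05, n)     # assumed training benefit
pre_uni = rng.normal(0.60, 0.10, n)              # unimodal control group
post_uni = pre_uni + rng.normal(0.02, 0.05, n)   # assumed smaller change

gain_av, gain_uni = post_av - pre_av, post_uni - pre_uni
t, p = stats.ttest_ind(gain_av, gain_uni)
print(f"group x time interaction (gain-score t-test): t = {t:.2f}, p = {p:.4f}")
```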


Do goats recognise humans cross-modally?

PeerJ

January 2025

Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, Hong Kong SAR, China.

Recognition plays a key role in the social lives of gregarious species, enabling animals to distinguish among social partners and tailor their behaviour accordingly. As domesticated animals regularly interact with humans, as well as members of their own species, we might expect mechanisms used to discriminate between conspecifics to also apply to humans. Given that goats can combine visual and vocal cues to recognise one another, we investigated whether this cross-modal recognition extends to discriminating among familiar humans.


Temporal Multi-Modal Knowledge Graphs (TMMKGs) can be regarded as a synthesis of Temporal Knowledge Graphs (TKGs) and Multi-Modal Knowledge Graphs (MMKGs), combining the characteristics of both. TMMKGs can effectively model dynamic real-world phenomena, particularly in scenarios involving multiple heterogeneous information sources and time series characteristics, such as e-commerce websites, scene recording data, and intelligent transportation systems. We propose a Temporal Multi-Modal Knowledge Graph Generation (TMMKGG) method that can automatically construct TMMKGs, aiming to reduce construction costs.
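The abstract does not give the TMMKGG algorithm, but the data structure it targets can be sketched as a fact store whose triples carry a validity time span plus per-entity multi-modal attachments. The schema below is a minimal, hypothetical illustration of a TMMKG, not the authors' implementation.

```python
# Sketch of a TMMKG core data structure: facts carry a validity time span,
# and entities can hold multi-modal attachments (image/audio URIs).
# Hypothetical schema; the paper's TMMKGG method is not specified here.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class TemporalFact:
    head: str
    relation: str
    tail: str
    start: str                     # ISO date when the fact becomes valid
    end: Optional[str] = None      # None = still valid

@dataclass
class TMMKG:
    facts: list = field(default_factory=list)
    modal: dict = field(default_factory=dict)   # entity -> list of media URIs

    def add_fact(self, fact: TemporalFact):
        self.facts.append(fact)

    def attach(self, entity: str, uri: str):
        self.modal.setdefault(entity, []).append(uri)

    def valid_at(self, date: str):
        """Return facts whose validity span contains the given ISO date."""
        return [f for f in self.facts
                if f.start <= date and (f.end is None or date <= f.end)]

kg = TMMKG()
kg.add_fact(TemporalFact("item_42", "listed_in", "electronics", "2024-01-01"))
kg.attach("item_42", "https://example.com/item_42.jpg")   # hypothetical URI
print(kg.valid_at("2024-06-01"))
```

Temporal scoping on facts plus modality attachments on entities is what distinguishes a TMMKG from a plain triple store, matching the e-commerce and scene-recording use cases the abstract mentions.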
