Sensory representations in primary visual cortex are not sufficient for subjective imagery.

Curr Biol

School of Psychology, University of Sussex, Brighton BN1 9QH, UK; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9RH, UK.

Published: November 2024

AI Article Synopsis

  • The modern concept of mental imagery involves sensory representations that mimic perception without being derived from it, along with a personal experience tied to that imagery.
  • Neuroimaging studies reveal that these sensory representations occur in the primary visual cortex (V1) and display similarities to actual perception.
  • Research comparing individuals who can visualize (visualizers) and those who cannot (aphantasics) found that while V1 can still decode sound content in aphantasics during passive listening, it fails during voluntary imagery, indicating a distinction between sensory representations and subjective experiences.

Article Abstract

The contemporary definition of mental imagery is characterized by two aspects: a sensory representation that resembles, but does not result from, perception, and an associated subjective experience. Neuroimaging demonstrated imagery-related sensory representations in primary visual cortex (V1) that show striking parallels to perception. However, it remains unclear whether these representations always reflect subjective experience or if they can be dissociated from it. We addressed this question by comparing sensory representations and subjective imagery among visualizers and aphantasics, the latter with an impaired ability to experience imagery. Importantly, to test for the presence of sensory representations independently of the ability to generate imagery on demand, we examined both spontaneous and voluntary imagery forms. Using multivariate fMRI, we tested for decodable sensory representations in V1 and subjective visual imagery reports that occurred either spontaneously (during passive listening of evocative sounds) or in response to the instruction to voluntarily generate imagery of the sound content (always while blindfolded inside the scanner). Among aphantasics, V1 decoding of sound content was at chance during voluntary imagery, and lower than in visualizers, but it succeeded during passive listening, despite them reporting no imagery. In contrast, in visualizers, decoding accuracy in V1 was greater in voluntary than spontaneous imagery (while being positively associated with the reported vividness of both imagery types). Finally, for both conditions, decoding in precuneus was successful in visualizers but at chance for aphantasics. Together, our findings show that V1 representations can be dissociated from subjective imagery, while implicating a key role of precuneus in the latter.
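The abstract's central method is multivariate decoding: classifying the stimulus category from distributed voxel activity patterns and testing whether accuracy exceeds chance. As an illustrative aside (not the authors' pipeline), a minimal leave-one-trial-out nearest-centroid decoder on simulated voxel data captures the logic; all data, dimensions, and the signal injected into category 1 are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_loo(patterns, labels):
    """Leave-one-trial-out nearest-centroid decoding: classify each held-out
    pattern by its distance to the per-category mean of the training trials."""
    n_correct = 0
    for i in range(len(labels)):
        train_x = np.delete(patterns, i, axis=0)
        train_y = np.delete(labels, i)
        centroids = {c: train_x[train_y == c].mean(axis=0)
                     for c in np.unique(train_y)}
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
        n_correct += int(pred == labels[i])
    return n_correct / len(labels)

# Simulated data: 40 trials x 50 voxels, two sound categories; category 1
# carries a small reliable signal in its first five voxels.
labels = np.tile([0, 1], 20)
patterns = rng.normal(size=(40, 50))
patterns[labels == 1, :5] += 1.0

accuracy = decode_loo(patterns, labels)  # chance level is 0.5
```

In this framing, "decoding at chance" in aphantasics during voluntary imagery corresponds to `accuracy` hovering near 0.5, while successful decoding during passive listening corresponds to accuracy reliably above it.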

Source
http://dx.doi.org/10.1016/j.cub.2024.09.062

Publication Analysis

Top Keywords

sensory representations: 20
imagery: 13
subjective imagery: 12
representations primary: 8
primary visual: 8
visual cortex: 8
subjective experience: 8
representations subjective: 8
generate imagery: 8
voluntary imagery: 8

Similar Publications

Neural representations for visual stimuli typically emerge with a bilateral distribution across occipitotemporal cortex (OTC). Pediatric patients undergoing unilateral OTC resection offer an opportunity to evaluate whether representations for visual stimulus individuation can sufficiently develop in a single OTC. Here, we assessed the non-resected hemisphere of patients with pediatric resection within (n = 9) and outside (n = 12) OTC, as well as healthy controls' two hemispheres (n = 21). Using functional magnetic resonance imaging, we mapped category selectivity (CS) and representations for visual stimulus individuation (for faces, objects, and words) with repetition suppression (RS).

Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing measured using Envelope Following Responses (EFRs) to amplitude modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.

Integrating spatial and temporal information is essential for our sensory experience. While psychophysical evidence suggests spatial dependencies in duration perception, few studies have directly tested the neural link between temporal and spatial processing. Using ultra-high-field functional MRI and neuronal-based modeling, we investigated how and where the processing and the representation of a visual stimulus duration is linked to that of its spatial location.

Beyond awareness: the binding of reflexive mechanisms with the conscious mind: a perspective from default space theory.

Front Hum Neurosci

December 2024

Charitable Medical Healthcare Foundation, Augusta, GA, United States.

How do reflexes operate so quickly on so much multimodal information about the environment? How might unconscious processes help reveal the nature of consciousness? The Default Space Theory of Consciousness (DST) offers a novel way to interpret these questions by describing how sensory inputs, cognitive functions, emotional states, and unconscious processes are integrated into a single unified internal representation. Recent developments in neuroimaging and electrophysiology, such as fMRI, EEG, and MEG, have improved our knowledge of the brain mechanisms that underpin the conscious mind and have highlighted the importance of neural oscillations and sensory integration in its formation. In this article, we put forth a perspective on the under-researched relationship between reflexes and the dynamic character of consciousness, and suggest that future research should focus on the interplay of the unconscious processes of reflexes and correlates of the contents of consciousness to better understand its nature.

Variational autoencoders (VAEs) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) and dorsal (Vafaii et al., 2023) pathways.
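As a minimal illustration of the Bayesian-inference framing (a generic sketch, not the cited papers' models): a Gaussian VAE's training objective (the ELBO) includes a closed-form KL term that pulls the encoder's approximate posterior toward the standard-normal prior. The function and variable names below are illustrative only.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ): the regularization term of
    the VAE objective that keeps the encoder's posterior near the prior."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

kl_at_prior = gaussian_kl(np.zeros(4), np.zeros(4))  # 0.0: posterior == prior
kl_shifted = gaussian_kl(np.ones(4), np.zeros(4))    # 2.0: posterior drifted
```

The KL term vanishes exactly when the posterior matches the prior and grows as the two diverge, which is the sense in which a VAE "interprets" inputs by probabilistic inference.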
