Publications by authors named "Radoslaw Cichy"

Visual stimuli compete with each other for cortical processing, and attention biases this competition in favor of the attended stimulus. How does the relationship between the stimuli affect the strength of this attentional bias? Here, we used functional MRI to explore the effect of target-distractor similarity in neural representation on attentional modulation in the human visual cortex, using univariate and multivariate pattern analyses. Using stimuli from four object categories (human bodies, cats, cars, and houses), we investigated attentional effects in the primary visual area V1, the object-selective regions LO and pFs, the body-selective region EBA, and the scene-selective region PPA.

Experience-based plasticity of the human cortex mediates the influence of individual experience on cognition and behavior. The complete loss of a sensory modality is among the most extreme such experiences. Investigating such a selective, yet extreme change in experience allows for the characterization of experience-based plasticity at its boundaries.

Visual deprivation does not silence the visual cortex, which is responsive to auditory, tactile, and other nonvisual tasks in blind persons. However, the underlying functional dynamics of the neural networks mediating such crossmodal responses remain unclear. Here, using braille reading as a model framework to investigate these networks, we presented sighted (N=13) and blind (N=12) readers with individual visual print and tactile braille alphabetic letters, respectively, during MEG recording.

Article Synopsis
  • Most research on visual system development has concentrated on early stages up to the primary visual cortex (V1), leaving higher visual areas less understood.
  • The typical assumption is that these higher areas mature in a set sequence based on their adult positions, but new evidence suggests this process involves unique network configurations rather than simply being smaller versions of the adult hierarchy.
  • Future studies should adopt a network-level approach to better understand normal development, pinpoint risks for developmental disorders, and create effective treatments.

Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain networks engaged by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events, recorded in ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events, and we identify correlates of video memorability scores extending into the parietal cortex.

Visual imagery and perception share neural machinery but rely on different information flow. While perception is driven by the integration of sensory feedforward and internally generated feedback information, imagery relies on feedback only. This suggests that although imagery and perception may activate overlapping brain regions, they do so in informationally distinctive ways.

To navigate through their immediate environment, humans process scene information rapidly. How does the cascade of neural processing elicited by scene viewing unfold over time to facilitate navigational planning? To investigate, we recorded human brain responses to visual scenes with electroencephalography and related them to computational models that operationalize three aspects of scene processing (2D, 3D, and semantic information), as well as to a behavioral model capturing navigational affordances. We found a temporal processing hierarchy: navigational affordance is processed later than the other scene features (2D, 3D, and semantic) investigated.
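The model-comparison logic sketched here is commonly implemented with representational similarity analysis: correlate each model's representational dissimilarity matrix (RDM) with time-resolved neural RDMs and compare peak latencies. The sketch below is illustrative only, using synthetic data; the model names "2D" and "affordance" are borrowed from the summary above, but the RDMs, peak timepoints, and numbers are all assumed, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_cond, n_times = 12, 60
triu = np.triu_indices(n_cond, k=1)  # unique pairwise dissimilarities

def random_rdm():
    """Symmetric matrix with zero diagonal, standing in for a model RDM."""
    m = rng.random((n_cond, n_cond))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0)
    return m

# Two hypothetical feature models with different (assumed) peak timepoints
models = {"2D": random_rdm(), "affordance": random_rdm()}
peak_time = {"2D": 15, "affordance": 40}

# Synthetic neural RDM series: each model's geometry dominates near its peak
neural = np.array([0.3 * random_rdm() for _ in range(n_times)])
for name, t0 in peak_time.items():
    neural[t0 - 3:t0 + 3] += models[name]

# Peak latency per model = timepoint of maximal RDM rank-correlation
latency = {
    name: int(np.argmax([spearmanr(neural[t][triu], rdm[triu]).correlation
                         for t in range(n_times)]))
    for name, rdm in models.items()
}
# latency["affordance"] falls later than latency["2D"]
```

With real EEG, the neural RDMs would come from pairwise decoding accuracies or pattern distances at each timepoint rather than from simulated matrices.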

To create coherent visual experiences, the brain spatially integrates the complex and dynamic information it receives from the environment. We previously demonstrated that feedback-related alpha activity carries stimulus-specific information when two spatially and temporally coherent naturalistic inputs can be integrated into a unified percept. In this study, we sought to determine whether such integration-related alpha dynamics are triggered by categorical coherence in visual inputs.

According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models and how they vary across people remains unclear. Here, we use drawing as a behavioral readout of the contents of the internal models in individual participants.

Communicative signals such as eye contact increase infants' brain activation to visual stimuli and promote joint attention. Our study assessed whether communicative signals during joint attention enhance infant-caregiver dyads' neural responses to objects, as well as their neural synchrony. To track mutual attention processes, we applied rhythmic visual stimulation (RVS), presenting images of objects to 12-month-old infants and their mothers (n = 37 dyads) while we recorded the dyads' brain activity.

Article Synopsis
  • The brain creates clear visual experiences by combining sensory inputs spread across our visual field during naturalistic vision.
  • Researchers studied this integration process using EEG and fMRI while changing the coherence of videos shown to participants, discovering that different brain activities (gamma for incoherent and alpha for coherent stimuli) reflect this processing.
  • Their findings suggest that rhythmic activity in the brain's feedback mechanisms is crucial for building coherent visual perceptions, influencing how visual information is processed from early stages to higher-level analysis.
Humans effortlessly make quick and accurate perceptual decisions about the nature of their immediate visual environment, such as the category of the scene they face. Previous research has revealed a rich set of cortical representations potentially underlying this feat. However, it remains unknown which of these representations are suitably formatted for decision-making.

Spatial attention helps us to efficiently localize objects in cluttered environments. However, the processing stage at which spatial attention modulates object location representations remains unclear. Here, we investigated this question by identifying processing stages in time and space in an EEG and an fMRI experiment, respectively.

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing.

Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both.

Visual categorization is a human core cognitive capacity that depends on the development of visual category representations in the infant brain. However, the exact nature of infant visual category representations and their relationship to the corresponding adult form remains unknown. Our results clarify the nature of visual category representations from electroencephalography (EEG) data in 6- to 8-month-old infants and their developmental trajectory toward adult maturity in the key characteristics of temporal dynamics, representational format, and spectral properties.

The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to train properly, and to date there is a lack of large brain datasets that extensively sample the temporal dynamics of visual object recognition.

Article Synopsis
  • Distinguishing between animate and inanimate objects is crucial for behavior, and this study explores the specific properties that influence brain responses and judgment.
  • Researchers examined five key dimensions related to animacy—being alive, looking like an animal, having agency, having mobility, and being unpredictable—using brain imaging (fMRI, EEG) and various judgment tasks on 19 participants.
  • While all dimensions significantly influenced behavior and brain activity, the dimension "being alive" surprisingly did not contribute to brain representations, suggesting different brain regions may process these properties differently for recognizing animacy.

Today, most neurocognitive studies in humans employ the non-invasive neuroimaging techniques functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). However, how exactly the data provided by fMRI and EEG relate to the underlying neural activity remains incompletely understood. Here, we aimed to understand the relation between EEG and fMRI data at the level of neural population codes using multivariate pattern analysis.
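One established way to relate EEG and fMRI at the level of population codes is representational-similarity-based fusion: rank-correlate a time-resolved EEG RDM with a static fMRI RDM from one region to estimate when EEG representations match that region's representational geometry. The following is a minimal sketch on synthetic data, not necessarily the study's exact procedure; all sizes and the region are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_times = 10, 40
triu = np.triu_indices(n_conditions, k=1)  # upper triangle of the RDM

def symmetrize(m):
    """Turn a random matrix into a valid RDM: symmetric, zero diagonal."""
    rdm = (m + m.T) / 2
    np.fill_diagonal(rdm, 0)
    return rdm

# Static fMRI RDM for one hypothetical region
fmri_rdm = symmetrize(rng.random((n_conditions, n_conditions)))

# Time-resolved EEG RDMs: unrelated noise early, region-like geometry later
eeg_rdms = np.array([symmetrize(rng.random((n_conditions, n_conditions)))
                     for _ in range(n_times)])
eeg_rdms[25:] += 5 * fmri_rdm  # signal dominates from timepoint 25 on

# Fusion timecourse: rank-correlate EEG and fMRI RDMs at each timepoint
fusion = np.array([
    spearmanr(eeg_rdms[t][triu], fmri_rdm[triu]).correlation
    for t in range(n_times)
])
# fusion stays near zero early and rises once the geometries align
```

The resulting timecourse assigns each region a temporal profile, which is what lets fMRI's spatial resolution and EEG's temporal resolution be combined.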

Humans can effortlessly categorize objects, both when they are conveyed through visual images and spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals.

Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent and time-course by which neural representations support the discrimination of relevant stimuli dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS.
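As a concrete illustration of time-resolved MVPA, the sketch below trains a classifier separately at each timepoint to decode two stimulus categories from simulated channel data. All numbers are arbitrary and the injected effect is synthetic; real infant EEG pipelines involve substantial additional preprocessing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 32, 50

# Simulated "EEG": trials x channels x timepoints of noise
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # two stimulus categories
# Inject a category difference in a late time window
X[y == 1, :, 30:40] += 0.8

# Decode the category separately at each timepoint with cross-validation
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# accuracy hovers near chance (0.5) early and rises in the injected window
```

The accuracy timecourse is exactly the "extent and time-course" quantity described above: the onset and peak of above-chance decoding indicate when the neural signal discriminates the stimulus dimension.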

To interact with objects in complex environments, we must know what they are and where they are in spite of challenging viewing conditions. Here, we investigated where, how and when representations of object location and category emerge in the human brain when objects appear on cluttered natural scene images using a combination of functional magnetic resonance imaging, electroencephalography and computational models. We found location representations to emerge along the ventral visual stream towards lateral occipital complex, mirrored by gradual emergence in deep neural networks.

Abstract conceptual representations are critical for human cognition. Despite their importance, key properties of these representations remain poorly understood. Here, we used computational models of distributional semantics to predict multivariate fMRI activity patterns during the activation and contextualization of abstract concepts.

During natural vision, objects rarely appear in isolation, but often within a semantically related scene context. Previous studies reported that semantic consistency between objects and scenes facilitates object perception and that scene-object consistency is reflected in changes in the N300 and N400 components in EEG recordings. Here, we investigate whether these N300/400 differences are indicative of changes in the cortical representation of objects.
