Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to the one found in vision. Sound categories were, however, more reliably encoded in the blind than the sighted group, using a representational format closer to the one found in vision.
Different contexts require us either to react immediately, or to delay (or suppress) a planned movement. Previous studies that aimed at decoding movement plans typically dissociated movement preparation and execution by means of delayed-movement paradigms. Here we asked whether these results can be generalized to the planning and execution of immediate movements.
Proc Natl Acad Sci U S A, December 2017
Incoming sensory input is condensed by our perceptual system to optimally represent and store information. In the temporal domain, this process has been described in terms of temporal windows (TWs) of integration/segregation, in which the phase of ongoing neural oscillations determines whether two stimuli are integrated into a single percept or segregated into separate events. However, TWs can vary substantially, raising the question of whether different TWs map onto unique oscillations or, rather, reflect a single, general fluctuation in cortical excitability (e.g., …).
How do humans recognize humans among other creatures? Recent studies suggest that a preference for conspecifics may already emerge in perceptual processing, in regions such as the right posterior superior temporal sulcus (pSTS), implicated in visual perception of biological motion. In the current functional MRI study, participants viewed point-light displays of human and nonhuman creatures moving in their typical bipedal (man and chicken) or quadrupedal mode (crawling-baby and cat). Stronger activity for man and chicken versus baby and cat was found in the right pSTS responsive to biological motion.
Humans prioritize different semantic qualities of a complex stimulus depending on their behavioral goals. These semantic features are encoded in distributed neural populations, yet it is unclear how attention might operate across these distributed representations. To address this, we presented participants with naturalistic video clips of animals behaving in their natural environments while the participants attended to either behavior or taxonomy.
The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set.
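Time-resolved multivariate decoding of the kind described in this abstract can be sketched generically: train a classifier on the sensor pattern at each timepoint and track accuracy over time. The sketch below is a minimal, hypothetical illustration in Python/NumPy with a toy nearest-class-mean classifier and simulated data; it is not the study's actual pipeline, and all names are invented for illustration.

```python
import numpy as np

def nearest_mean_decode(X_train, y_train, X_test, y_test):
    """Toy classifier: assign each test trial to the nearest class mean."""
    classes = np.unique(y_train)
    means = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=-1)
    return (classes[np.argmin(d, axis=1)] == y_test).mean()

# toy MEG-like data: trials x sensors x timepoints, with a category
# signal present in 8 sensors only from timepoint 30 onwards
rng = np.random.default_rng(0)
n_trials, n_sensors, n_time = 40, 32, 60
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_sensors, n_time))
X[y == 1, :8, 30:] += 1.5

# split trials into train/test halves, then decode at every timepoint
train = np.r_[0:10, 20:30]
test_ = np.setdiff1d(np.arange(n_trials), train)
acc = [nearest_mean_decode(X[train, :, t], y[train], X[test_, :, t], y[test_])
       for t in range(n_time)]  # decoding accuracy per timepoint
```

With data like these, accuracy hovers at chance early on and rises once the simulated category signal appears, which is the basic logic of tracking a neural representation over time.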
Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance imaging (fMRI) data and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages that treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, classification, correlations, representational similarity analysis, and the time generalization method.
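CoSMoMVPA itself is implemented in Matlab/GNU Octave, so the snippet below is only a language-neutral illustration of one technique it supports, representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each dataset and correlate their upper triangles. All function names and the toy data are hypothetical; this is not CoSMoMVPA's API.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# toy data: 8 conditions x 50 voxels; `noisy` shares the representational
# geometry of `base`, `unrelated` does not
rng = np.random.default_rng(0)
base = rng.standard_normal((8, 50))
noisy = base + 0.1 * rng.standard_normal((8, 50))
unrelated = rng.standard_normal((8, 50))

same = rsa_score(rdm(base), rdm(noisy))      # high: shared geometry
diff = rsa_score(rdm(base), rdm(unrelated))  # near zero
```

Comparing RDMs rather than raw patterns is what lets RSA relate representations across modalities (e.g., fMRI and M/EEG) that do not share a common feature space.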
To be able to interact with our environment, we need to transform incoming sensory information into goal-directed motor outputs. Whereas our ability to plan an appropriate movement based on sensory information appears effortless and simple, the underlying brain dynamics are still largely unknown. Here we used magnetoencephalography (MEG) to investigate this issue by recording brain activity during the planning of non-visually guided reaching and grasping actions, performed with either the left or right hand.
Understanding other people's actions is a fundamental prerequisite for social interactions. Whether action understanding relies on simulating the actions of others in the observers' motor system or on access to conceptual knowledge stored in nonmotor areas is strongly debated. It has been argued previously that areas that play a crucial role in action understanding should (1) distinguish between different actions, (2) generalize across the ways in which actions are performed (Dinstein et al., …).
Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy.
Cook et al. overstate the evidence supporting their associative account of mirror neurons in humans: most studies do not address a key property: action specificity that generalizes across the visual and motor domains. Multivariate pattern analysis (MVPA) of neuroimaging data can address this concern, and we illustrate how MVPA can be used to test key predictions of their account.
Studies investigating the role of oscillatory activity in sensory perception are primarily conducted in the visual domain, while the contribution of oscillatory activity to auditory perception is heavily understudied. The objective of the present study was to investigate macroscopic (EEG) oscillatory brain response patterns that contribute to an auditory (Zwicker tone, ZT) illusion. Three different analysis approaches were chosen: 1) a parametric variation of the ZT illusion intensity via three different notch widths of the ZT-inducing noise; 2) contrasts of high-versus-low-intensity ZT illusion trials, excluding physical stimulus differences; 3) a representational similarity analysis to relate source activity patterns to loudness ratings.
The notion of a frontoparietal human mirror neuron system (HMNS) has been used to explain a range of social phenomena. However, most human neuroimaging studies of this system do not address critical 'mirror' properties: neural representations should be action specific and should generalise across visual and motor modalities. Studies using repetition suppression (RS) and, particularly, multivariate pattern analysis (MVPA) highlight the contribution to action perception of anterior parietal regions.
People rapidly form impressions from facial appearance, and these impressions affect social decisions. We argue that data-driven, computational models are the best available tools for identifying the source of such impressions. Here we validate seven computational models of social judgments of faces: attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness.
An important human capacity is the ability to imagine performing an action, and its consequences, without actually executing it. Here we seek neural representations of specific manual actions that are common across visuo-motor performance and imagery. Participants were scanned with fMRI while they performed and observed themselves performing two different manual actions during some trials, and imagined performing and observing themselves performing the same actions during other trials.
The discovery of mirror neurons-neurons that code specific actions both when executed and observed-in area F5 of the macaque provides a potential neural mechanism underlying action understanding. To date, neuroimaging evidence for similar coding of specific actions across the visual and motor modalities in human ventral premotor cortex (PMv)-the putative homologue of macaque F5-is limited to the case of actions observed from a first-person perspective. However, it is the third-person perspective that figures centrally in our understanding of the actions and intentions of others.
How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers, and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions but was not specialized for category.
Motivation improves the efficiency of intentional behavior, but how this performance modulation is instantiated in the human brain remains unclear. We used a reward-cued antisaccade paradigm to investigate how motivational goals (the expectation of a reward for good performance) modulate patterns of neural activation and functional connectivity to improve preparation for antisaccade performance. Behaviorally, subjects performed better (faster and more accurate antisaccades) when they knew they would be rewarded for good performance.
In two fMRI experiments (n = 44) using tasks with different demands (approach-avoidance versus one-back recognition decisions), we measured the responses to the social value of faces. The face stimuli were produced by a parametric model of face evaluation that reduces multiple social evaluations to two orthogonal dimensions of valence and power [Oosterhof, N. N., …].
For functional magnetic resonance imaging (fMRI), multi-voxel pattern analysis (MVPA) has been shown to be a sensitive method to detect areas that encode certain stimulus dimensions. By moving a searchlight through the volume of the brain, one can continuously map the information content about the experimental conditions of interest throughout the brain. Traditionally, the searchlight is defined as a volume sphere that does not take into account the anatomy of the cortical surface.
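The traditional volume-sphere searchlight that this abstract contrasts with a surface-based one can be sketched as follows: define a spherical neighborhood of voxels around every center, compute an information measure within each, and obtain a whole-brain information map. This is a minimal, hypothetical Python/NumPy illustration (the toolboxes involved are Matlab-based); it uses a simple distance-between-condition-means measure in place of a real classifier, and all names and data are invented.

```python
import numpy as np

def sphere_neighborhoods(coords, radius):
    """Volume-sphere searchlight: for each voxel center, the indices of
    all voxels within `radius` of it (same units as `coords`)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return [np.flatnonzero(row <= radius) for row in d]

# toy data: 30 voxels along a line, two conditions, effect in voxels 0-4
rng = np.random.default_rng(2)
coords = np.c_[np.arange(30.0), np.zeros(30), np.zeros(30)]
y = np.repeat([0, 1], 15)          # condition label per trial
X = rng.standard_normal((30, 30))  # trials x voxels
X[y == 1, :5] += 2.0               # condition effect in the first 5 voxels

# map a simple information measure (distance between condition means)
# over every searchlight neighborhood
info_map = [np.linalg.norm(X[y == 0][:, nb].mean(0) - X[y == 1][:, nb].mean(0))
            for nb in sphere_neighborhoods(coords, radius=2.0)]
```

A surface-based variant would keep the same per-neighborhood logic but select neighbors by distance along the cortical surface mesh instead of Euclidean distance in the volume, avoiding neighborhoods that mix voxels from opposite banks of a sulcus.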
Many lines of evidence point to a tight linkage between the perceptual and motoric representations of actions. Numerous demonstrations show how the visual perception of an action engages compatible activity in the observer's motor system. This is seen for both intransitive actions (e.g., …).
Using a composite-face paradigm, we show that social judgments from faces rely on holistic processing. Participants judged facial halves more positively when aligned with trustworthy than with untrustworthy halves, despite instructions to ignore the aligned parts (experiment 1). This effect was substantially reduced when the faces were inverted (experiments 2 and 3) and when the halves were misaligned (experiment 3).
Perception of both gaze direction and symbolic directional cues (e.g. arrows) orients an observer's attention toward the indicated location.
Using a dynamic stimuli paradigm, in which faces expressed either happiness or anger, the authors tested the hypothesis that perceptions of trustworthiness are related to these expressions. Although the same emotional intensity was added to both trustworthy and untrustworthy faces, trustworthy faces who expressed happiness were perceived as happier than untrustworthy faces, and untrustworthy faces who expressed anger were perceived as angrier than trustworthy faces. The authors also manipulated changes in face trustworthiness simultaneously with the change in expression.
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional magnetic resonance imaging study (study 2).