To interpret our surroundings, the brain uses a visual categorization process. Current theories and models suggest that this process comprises a hierarchy of different computations that transforms complex, high-dimensional inputs into lower-dimensional representations (i.e.
Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown.
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior. For example, attack often follows signals of intense aggression if receivers fail to retreat. Humans regularly use facial expressions to communicate such information.
Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization. However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories-e.
Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes).
Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question-what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias.
To date, social and nonsocial decisions have been studied largely in isolation. Consequently, the extent to which social and nonsocial forms of decision uncertainty are integrated using shared neurocomputational resources remains elusive. Here, we address this question using simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) in healthy human participants (young adults of both sexes) and a task in which decision evidence in social and nonsocial contexts varies along comparable scales.
Trends Cogn Sci
December 2022
Deep neural networks (DNNs) have become powerful and increasingly ubiquitous tools to model human cognition, and often produce similar behaviors. For example, with their hierarchical, brain-inspired organization of computations, DNNs apparently categorize real-world images in the same way as humans do. Does this imply that their categorization algorithms are also similar? We have framed the question with three embedded degrees that progressively constrain algorithmic similarity evaluations: equivalence of (i) behavioral/brain responses, which is current practice, (ii) the stimulus features that are processed to produce these outcomes, which is more constraining, and (iii) the algorithms that process these shared features, the ultimate goal.
Experimental studies in cognitive science typically focus on the population average effect. An alternative is to test each individual participant and then quantify the proportion of the population that would show the effect: the prevalence, or participant replication probability. We argue that this approach has conceptual and practical advantages.
A key challenge in neuroimaging remains to understand where, when, and particularly how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs.
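As a minimal illustration of why these three tasks demand different computations over identical inputs, here are their truth tables over the same four input pairs (the variable names below are just for this sketch; notably, XOR is the only one of the three that is not linearly separable):

```python
# Truth tables for AND, OR, and XOR over the same four binary input pairs.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

AND = {x: x[0] & x[1] for x in inputs}  # 1 only for (1, 1)
OR  = {x: x[0] | x[1] for x in inputs}  # 1 unless (0, 0)
XOR = {x: x[0] ^ x[1] for x in inputs}  # 1 when the inputs differ

# Same inputs, three different input-output mappings:
assert [AND[x] for x in inputs] == [0, 0, 0, 1]
assert [OR[x]  for x in inputs] == [0, 1, 1, 1]
assert [XOR[x] for x in inputs] == [0, 1, 1, 0]
```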
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions, including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal." An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information-i.
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces).
Within neuroscience, psychology, and neuroimaging, the most frequently used statistical approach is null hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST.
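A minimal numerical sketch of the prevalence idea described above, not the paper's full method: under the simplifying assumptions of perfect within-participant sensitivity and a uniform prior, a participant tests positive with probability gamma + (1 - gamma) * alpha, where gamma is the population prevalence and alpha is the within-participant false positive rate. The function name and the 7-of-10 example numbers below are illustrative.

```python
import numpy as np
from scipy.stats import binom

def prevalence_posterior(k, n, alpha=0.05, grid=1001):
    """Posterior density over population prevalence gamma, given that
    k of n participants were significant at within-participant
    threshold alpha. Simplifying assumptions: perfect sensitivity and
    a uniform prior on gamma."""
    gamma = np.linspace(0.0, 1.0, grid)
    # P(a participant tests positive) = true positives + false positives
    theta = gamma + (1.0 - gamma) * alpha
    post = binom.pmf(k, n, theta)               # likelihood x uniform prior
    post /= post.sum() * (gamma[1] - gamma[0])  # normalize to a density
    return gamma, post

# Toy example: 7 of 10 participants significant at alpha = 0.05.
gamma, post = prevalence_posterior(k=7, n=10)
map_estimate = gamma[np.argmax(post)]  # near (0.7 - 0.05) / 0.95
```

Under these assumptions the posterior mode has the closed form (k/n - alpha) / (1 - alpha), which the grid estimate approximates.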
Facial attractiveness confers considerable advantages in social interactions, with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average while optimizing sexual dimorphism. However, emerging evidence questions this model as an accurate representation of facial attractiveness, including representing the diversity of beauty preferences within and across cultures.
A longstanding debate in the face recognition field concerns the format of face representations in the brain. New face research clarifies some of this mystery by revealing a face-centered format in a patient with a left splenium lesion of the corpus callosum who perceives the right side of faces as 'melted'.
Action video game players (AVGPs) display superior performance in various aspects of cognition, especially in perception and top-down attention. The existing literature has examined this performance almost exclusively with stimuli and tasks devoid of any emotional content. Thus, whether the superior performance documented in the cognitive domain extends to the emotional domain remains unknown.
Philos Trans R Soc Lond B Biol Sci
May 2020
The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge).
Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the critical challenge of understanding how aging changes neural information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed the basic social interaction task of detecting a face versus noise while we recorded their electroencephalogram (EEG).
Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them.
Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations-e.g.
Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction.
Primate brains and state-of-the-art convolutional neural networks can recognize many faces, objects and scenes, though how they do so is often mysterious. New research unveils some of the mystery, revealing unexpected complexity in the recognition strategies of rodents.
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles.