Recognizing emotional facial expressions is part of the perceptual decision-making processes in the brain. Arriving at a decision becomes more difficult for the brain when the available sensory information is limited or ambiguous. We presented 32 participants with clear and noisy pictures of happy and angry facial expressions and asked them to categorize the pictures by emotion. Behavioral accuracy and reaction time differed significantly between decisions on clear and noisy images. Functional magnetic resonance imaging showed that the inferior occipital gyrus (IOG), fusiform gyrus (FG), amygdala (AMG), and ventrolateral prefrontal cortex (VPFC), along with other regions, were active during the perceptual decision-making process. Dynamic causal modeling analysis yielded three main results. First, using a Bayesian model selection (BMS) approach, we found that feed-forward network activity was enhanced more during the processing of clear and noisy happy faces than during the processing of clear angry faces. The AMG mediated this feed-forward connectivity for clear and noisy happy faces, whereas AMG mediation was absent for clear angry faces; for noisy angry faces, however, the network activity was again enhanced. Second, connectivity parameters obtained from Bayesian model averaging (BMA) indicated that forward connectivity dominated over backward connectivity during these processes. Third, based on the BMA parameters, we found that easier tasks modulated effective connectivity from the IOG to the FG, AMG, and VPFC more strongly than difficult tasks did. These findings suggest that both parallel and hierarchical brain processes are at work during perceptual decision-making about negative, positive, unambiguous, and ambiguous emotional expressions, but that the AMG-mediated feed-forward network plays a dominant role in such decisions.
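As a rough illustration of how BMS and BMA combine evidence across candidate models, the minimal sketch below turns hypothetical log model evidences into posterior model probabilities (assuming a flat prior over models) and then averages a single connectivity parameter across models. All numbers and the IOG-to-FG label are illustrative assumptions, not values reported in this study.

```python
import numpy as np

# Hypothetical log model evidences for three candidate DCMs
# (e.g., feed-forward, feed-back, and fully recurrent architectures).
log_evidence = np.array([-1203.4, -1210.9, -1208.1])

# Posterior model probabilities under a flat model prior:
# p(m | y) is proportional to exp(log-evidence), normalized stably.
log_post = log_evidence - log_evidence.max()
post_prob = np.exp(log_post) / np.exp(log_post).sum()

# Hypothetical posterior means of one connection (e.g., IOG -> FG, in Hz)
# estimated separately under each model.
iog_to_fg = np.array([0.42, 0.18, 0.31])

# BMA: average the parameter over models, weighted by posterior probability.
bma_iog_to_fg = np.sum(post_prob * iog_to_fg)

print("Posterior model probabilities:", np.round(post_prob, 3))
print("BMA estimate of IOG->FG coupling:", round(bma_iog_to_fg, 3))
```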
DOI: http://dx.doi.org/10.1089/brain.2013.0145
Sensors (Basel)
December 2024
Faculty of Information Science and Technology, Beijing University of Technology, Beijing 100124, China.
With the increasing complexity of urban roads and rising traffic flow, traffic safety has become a critical societal concern. Current research primarily addresses drivers' attention, reaction speed, and perceptual abilities, but comprehensive assessments of cognitive abilities in complex traffic environments are lacking. This study, grounded in cognitive science and neuropsychology, identifies and quantitatively evaluates ten cognitive components related to driving decision-making, execution, and psychological states by analyzing video footage of drivers' actions.
J Neurosci
January 2025
Department of Physiology, Anatomy and Genetics, University of Oxford.
Limits on information processing capacity impose limits on task performance. We show that male and female mice achieve performance on a perceptual decision task that is near-optimal given their capacity limits, as measured by policy complexity (the mutual information between states and actions). This behavioral profile could be achieved by reinforcement learning with a penalty on high complexity policies, realized through modulation of dopaminergic learning signals.
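For readers unfamiliar with the policy-complexity measure mentioned above, the following minimal sketch estimates the mutual information between states and actions from a state-action count table. The counts are made up for illustration and are not data from the study.

```python
import numpy as np

# Illustrative state-action counts (2 states x 2 actions), not real data.
counts = np.array([[40, 10],   # state 0: counts of action 0 and action 1
                   [8, 42]])   # state 1: counts of action 0 and action 1

p_sa = counts / counts.sum()           # joint distribution p(s, a)
p_s = p_sa.sum(axis=1, keepdims=True)  # marginal p(s)
p_a = p_sa.sum(axis=0, keepdims=True)  # marginal p(a)

# Mutual information I(S; A) in bits; zero-probability cells contribute 0.
ratio = np.where(p_sa > 0, p_sa / (p_s * p_a), 1.0)
mi_bits = np.sum(p_sa * np.log2(ratio))

print(f"Policy complexity I(S;A) = {mi_bits:.3f} bits")
```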
Neural Comput
January 2025
Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, BT48 7JL Derry-Londonderry, Northern Ireland, U.K.
Decision formation in perceptual decision making involves sensory evidence accumulation instantiated by the temporal integration of an internal decision variable toward some decision criterion or threshold, as described by sequential sampling theoretical models. The decision variable can be represented in the form of experimentally observable neural activities. Hence, elucidating the appropriate theoretical model becomes crucial to understanding the mechanisms underlying perceptual decision formation.
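As a concrete, deliberately simplified illustration of the sequential sampling idea described here, the sketch below simulates a basic drift-diffusion process in which a decision variable integrates noisy evidence over time until it reaches one of two thresholds. All parameter values are assumptions for illustration, not fits to any data from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift=0.3, noise=1.0, threshold=1.0, dt=0.001, max_t=3.0):
    """Return (choice, reaction time) for one simulated trial."""
    x, t = 0.0, 0.0
    # Integrate noisy evidence until a bound is hit or time runs out.
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= threshold else 0   # 1 = upper bound, 0 = lower bound / timeout
    return choice, t

trials = [simulate_trial() for _ in range(1000)]
choices, rts = zip(*trials)
print("Proportion of upper-bound choices:", np.mean(choices))
print("Mean RT (s):", round(np.mean(rts), 3))
```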
View Article and Find Full Text PDFCurrent neural network models of primate vision focus on replicating overall levels of behavioral accuracy, often neglecting perceptual decisions' rich, dynamic nature. Here, we introduce a novel computational framework to model the dynamics of human behavioral choices by learning to align the temporal dynamics of a recurrent neural network (RNN) to human reaction times (RTs). We describe an approximation that allows us to constrain the number of time steps an RNN takes to solve a task with human RTs.
Front Psychol
December 2024
Faculty of Systems Information Science, Future University Hakodate, Hakodate, Japan.
Introduction: Effective decision-making in ball games requires the ability to convert positional information from a first-person perspective into a bird's-eye view. To address this need, we developed a virtual reality (VR)-based training system designed to enhance spatial cognition.
Methods: Using a head-mounted virtual reality display, participants engaged in tasks where they tracked multiple moving objects in a virtual space and reproduced their positions from a bird's-eye perspective.