Visual scene recognition is a dynamic process through which incoming sensory information is iteratively compared with predictions regarding the most likely identity of the input stimulus. In this study, we used a novel progressive unfolding task to characterize the accumulation of perceptual evidence prior to scene recognition, and its potential modulation by the emotional valence of these scenes. Our results show that emotional (pleasant and unpleasant) scenes led to slower accumulation of evidence compared to neutral scenes. In addition, when controlling for the potential contribution of non-emotional factors (i.e., familiarity and complexity of the pictures), our results confirm a reliable shift in the accumulation of evidence for pleasant relative to neutral and unpleasant scenes, suggesting a valence-specific effect. These findings indicate that proactive iterations between sensory processing and top-down predictions during scene recognition are reliably influenced by the rapidly extracted (positive) emotional valence of the visual stimuli. We interpret these findings in accordance with the notion of a genuine positivity offset during emotional scene recognition.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3364984 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0038064 | PLOS |
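The progressive unfolding task is a behavioral paradigm, and the abstract does not specify a computational model. Purely as an illustration, the sketch below is a minimal sequential-sampling (drift-diffusion-style) simulation in Python showing how a lower drift rate delays threshold crossing, i.e., slower accumulation of evidence for emotional relative to neutral scenes. The function names, thresholds, and drift values are hypothetical and are not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_recognition_time(drift, threshold=1.0, noise_sd=0.1,
                              dt=0.01, max_steps=5000):
    """Accumulate noisy evidence until it crosses the decision threshold.

    Returns the first time (in arbitrary units) at which the accumulated
    evidence exceeds `threshold`, or np.nan if it never does.
    """
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if evidence >= threshold:
            return step * dt
    return np.nan

# Hypothetical drift rates: a lower drift for emotional scenes reproduces
# the reported slower accumulation relative to neutral scenes.
conditions = {"neutral": 0.9, "pleasant": 0.6, "unpleasant": 0.7}

for label, drift in conditions.items():
    times = [simulate_recognition_time(drift) for _ in range(500)]
    print(f"{label:>10}: mean recognition time ~ {np.nanmean(times):.2f}")
```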
Neural Netw
January 2025
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an, 710054, China; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, 710054, China.
The presence of substantial similarities and redundant information within video data limits the performance of video object recognition models. To address this issue, a Global-Local Storage Enhanced video object recognition model (GSE) is proposed in this paper. Firstly, the model incorporates a two-stage dynamic multi-frame aggregation module to aggregate shallow frame features.
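The abstract does not describe the aggregation mechanism in detail. As a rough sketch only, the snippet below shows one generic way to aggregate shallow per-frame features around a key frame using similarity-based weights, in plain NumPy; the function name, shapes, and softmax weighting are assumptions for illustration and are not taken from the GSE implementation.

```python
import numpy as np

def aggregate_frames(frame_feats: np.ndarray, key_idx: int) -> np.ndarray:
    """Similarity-weighted aggregation of shallow per-frame feature vectors.

    frame_feats: (num_frames, feat_dim) array, one feature row per frame.
    key_idx:     index of the key frame whose representation is enhanced.

    Frames more similar to the key frame receive larger weights, which is
    one simple way to down-weight redundant or dissimilar support frames.
    """
    key = frame_feats[key_idx]
    norms = np.linalg.norm(frame_feats, axis=1) * np.linalg.norm(key) + 1e-8
    sims = frame_feats @ key / norms              # cosine similarity per frame
    weights = np.exp(sims) / np.exp(sims).sum()   # softmax over frames
    return weights @ frame_feats                  # (feat_dim,) aggregated feature

# Toy usage: 8 frames with 256-dimensional shallow features.
feats = np.random.default_rng(1).standard_normal((8, 256))
enhanced = aggregate_frames(feats, key_idx=4)
print(enhanced.shape)  # (256,)
```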
Front Hum Neurosci
December 2024
Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands.
Introduction: Global Visual Selective Attention (VSA) is the ability to integrate multiple visual elements of a scene to achieve a visual overview. This ability is essential for navigating crowded environments and for recognizing objects or faces. Clinical pediatric research on global VSA deficits has focused primarily on autism spectrum disorder (ASD).
Sensors (Basel)
December 2024
Automation Department, North China Electric Power University, Baoding 071003, China.
To address the severe occlusion and tiny-scale object problems in the multi-fitting detection task, the Scene Knowledge Integrating Network (SKIN), comprising a scene filter module (SFM) and a scene structure information module (SSIM), is proposed. First, the particularity of the scene in the multi-fitting detection task is analyzed: drawing on professional knowledge of the power field and on how operators habitually identify fittings, an aggregation of fittings is defined as a scene.
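The abstract names the SFM and SSIM without describing their internals. Only as a loose illustration of the general idea of injecting scene knowledge into detection, the sketch below re-scores candidate fitting detections with a scene-level co-occurrence prior; the co-occurrence table, scoring rule, and all names are invented for the example and are not SKIN's method.

```python
import numpy as np

# Hypothetical prior: how strongly each fitting class co-occurs with each
# scene type (rows: scene types, columns: fitting classes). Values made up.
CO_OCCURRENCE = np.array([
    [0.9, 0.1, 0.4],   # scene type 0
    [0.2, 0.8, 0.6],   # scene type 1
])

def rescore_detections(scores: np.ndarray, scene_type: int,
                       alpha: float = 0.5) -> np.ndarray:
    """Blend raw detector confidences with a scene-level co-occurrence prior.

    scores: (num_detections, num_classes) confidences for candidate boxes.
    Classes that are plausible for the recognized scene type are boosted,
    which can help small or heavily occluded fittings survive thresholding.
    """
    prior = CO_OCCURRENCE[scene_type]                 # (num_classes,)
    return (1 - alpha) * scores + alpha * scores * prior

raw = np.array([[0.30, 0.25, 0.20],
                [0.10, 0.55, 0.15]])
print(rescore_detections(raw, scene_type=0))
```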
Sensors (Basel)
December 2024
School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, China.
Text recognition is a rapidly evolving task with broad practical applications across multiple industries. However, arbitrarily shaped text arrangements, irregular fonts, and unintended occlusions of the text keep it a challenging task. To handle images with arbitrarily shaped text arrangements and irregular fonts, we designed the Discriminative Standard Text Font (DSTF) and the Feature Alignment and Complementary Fusion (FACF) modules.
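The DSTF and FACF modules are not described further here. Purely as an illustration of complementary feature fusion, the snippet below blends two aligned feature vectors (e.g., an irregular-text feature and a standard-font feature) with a learned scalar gate; the gating scheme, names, and dimensions are assumptions for the example, not the paper's FACF.

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_irregular: np.ndarray, feat_standard: np.ndarray,
                 w_gate: np.ndarray, b_gate: float = 0.0) -> np.ndarray:
    """Complementary fusion of two aligned feature vectors via a scalar gate.

    A gate in (0, 1), computed from both inputs, decides how much of the
    standard-font feature to mix into the irregular-text feature; the two
    inputs are assumed to be already aligned to the same dimensionality.
    """
    gate = sigmoid(np.concatenate([feat_irregular, feat_standard]) @ w_gate + b_gate)
    return gate * feat_irregular + (1.0 - gate) * feat_standard

rng = np.random.default_rng(2)
f_irr, f_std = rng.standard_normal(64), rng.standard_normal(64)
w = rng.standard_normal(128) * 0.1
print(gated_fusion(f_irr, f_std, w).shape)  # (64,)
```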
Resusc Plus
January 2025
Department of Emergency Medicine and Pre-hospital services, St. Olav's University Hospital, NO-7006, Trondheim, Norway.