Seeing inferences: brain dynamics and oculomotor signatures of non-verbal deduction.

Sci Rep

Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Ramon Trias Fargas, 25-27, 08005, Barcelona, Spain.

Published: February 2023

We often express our thoughts through words, but thinking goes well beyond language. Here we focus on an elementary yet fundamental thinking process, disjunction elimination, elicited by simple visual scenes devoid of linguistic content, and describe its neural and oculomotor correlates. We track two main components of a nonverbal deductive process: the construction of a logical representation (A or B), and its simplification by deduction (not A, therefore B). We identify the network active in the two phases and show that in the latter, but not in the former, it overlaps with areas known to respond to verbal logical reasoning. Oculomotor markers consistently differentiate logical processing induced by the construction of a representation, its simplification by deductive inference, and its maintenance when inferences cannot be drawn. Our results reveal how integrative logical processes incorporate novel experience into the flow of thoughts induced by visual scenes.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9911777
DOI: http://dx.doi.org/10.1038/s41598-023-29307-3


Similar Publications

To address the problems of target detection by vehicle-mounted visual sensors in foggy environments, a vehicle target detection method based on an improved YOLOX network is proposed. First, to counter the loss of vehicle target features in foggy traffic scene images, specific characteristics of fog-affected imagery are integrated into the network training process. This not only augments the training data but also improves the robustness of the network in foggy environments.


Event-Based Visual/Inertial Odometry for UAV Indoor Navigation.

Sensors (Basel)

December 2024

SOTI Aerospace, SOTI Inc., Mississauga, ON L5N 8L9, Canada.

Indoor navigation is becoming increasingly essential for multiple applications. It is complex and challenging due to dynamic scenes, limited space, and, more importantly, the unavailability of global navigation satellite system (GNSS) signals. Recently, new sensors have emerged, namely event cameras, which show great potential for indoor navigation due to their high dynamic range and low latency.


Background: National response time targets for ambulance services are known to be more strongly maintained in urban areas than in rural ones. This may mean that responses in rural areas are less immediate, which can in turn affect the survival of those experiencing cardiac arrest. Thus, analysis of variation in response times using routinely collected data can be used to understand which rural areas have the highest need for emergency intervention.


Visual semantic decoding aims to extract perceived semantic information from the visual responses of the human brain and convert it into interpretable semantic labels. Although significant progress has been made in semantic decoding across individual visual cortices, studies on the semantic decoding of the ventral and dorsal cortical visual pathways remain limited. This study proposed a graph neural network (GNN)-based semantic decoding model on a natural scene dataset (NSD) to investigate how the decoding of the dorsal and ventral pathways differs when processing various parts of speech, including verbs, nouns, and adjectives.


Our visual system enables us to effortlessly navigate and recognize real-world visual environments. Functional magnetic resonance imaging (fMRI) studies suggest a network of scene-responsive cortical visual areas, but much less is known about the temporal order in which different scene properties are analysed by the human visual system. In this study, we selected a set of 36 full-colour natural scenes varying in spatial structure and semantic content, which our male and female human participants viewed both in 2D and 3D while we recorded magnetoencephalography (MEG) data.

