Explorative eye movements specifically target some parts of a scene while ignoring others. Here, we investigate how local image structure--defined by spatial-frequency contrast--and informative image content--defined by higher-order image statistics--are weighted for the selection of fixation points. We measured eye movements of macaque monkeys freely viewing a set of natural and manipulated images in the absence of any particular task. To probe the effect of scene content, we locally introduced patches of pink noise into natural images, and to probe the interaction with image structure, we altered the contrast of the noise. We found that fixations specifically targeted the natural image parts and spared the uninformative noise patches. However, both increasing and decreasing the contrast of the noise attracted more fixations and, in the extreme cases, compensated for the effect of missing content. Introducing patches from another natural image led to similar results. In all paradigms tested, the interaction between scene structure and informative scene content was the same for each of the first six fixations on an image, demonstrating that the weighting of these factors is constant during viewing of an image. These results call into question theories suggesting that initial fixations are driven by stimulus structure whereas later fixations are determined by informative scene content.
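The stimulus manipulation described above--replacing a local region of a natural image with a pink (1/f) noise patch at a chosen contrast--can be sketched in a few lines. This is a minimal illustration of the general technique, not the authors' actual stimulus code; the function names and the RMS-contrast convention are assumptions for the example.

```python
import numpy as np

def pink_noise_patch(size, rng=None):
    """Generate a square 1/f ("pink") noise patch by shaping
    white noise in the Fourier domain, normalized to zero mean
    and unit variance."""
    rng = np.random.default_rng(rng)
    white = rng.standard_normal((size, size))
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # avoid division by zero at the DC component
    patch = np.real(np.fft.ifft2(np.fft.fft2(white) / f))
    return (patch - patch.mean()) / patch.std()

def insert_patch(image, patch, top, left, contrast=1.0):
    """Replace a region of a grayscale image (values in [0, 1])
    with a pink-noise patch scaled to the given RMS contrast,
    keeping the local mean luminance (assumed convention)."""
    out = image.copy()
    h, w = patch.shape
    mean = out[top:top + h, left:left + w].mean()
    out[top:top + h, left:left + w] = np.clip(
        mean + contrast * mean * patch, 0.0, 1.0)
    return out
```

Raising or lowering the `contrast` argument corresponds to the contrast manipulation of the noise patches probed in the study.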

Source
http://dx.doi.org/10.1016/j.visres.2006.02.003

Publication Analysis

Top Keywords

scene content (12), image (10), interaction image (8), image structure (8), eye movements (8), contrast noise (8), natural image (8), informative scene (8), fixations (6), content (5)

Similar Publications

Visual semantic decoding aims to extract perceived semantic information from the visual responses of the human brain and convert it into interpretable semantic labels. Although significant progress has been made in semantic decoding across individual visual cortices, studies on the semantic decoding of the ventral and dorsal cortical visual pathways remain limited. This study proposed a graph neural network (GNN)-based semantic decoding model on a natural scene dataset (NSD) to investigate the decoding differences between the dorsal and ventral pathways in processing various parts of speech, including verbs, nouns, and adjectives.

Our visual system enables us to effortlessly navigate and recognize real-world visual environments. Functional magnetic resonance imaging (fMRI) studies suggest a network of scene-responsive cortical visual areas, but much less is known about the temporal order in which different scene properties are analysed by the human visual system. In this study, we selected a set of 36 full-colour natural scenes that varied in spatial structure and semantic content, which our male and female human participants viewed both in 2D and 3D while we recorded magnetoencephalography (MEG) data.

When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required for the correct generation of the binocular images. Despite design and calibration efforts, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research investigated these effects in virtual reality (VR) setups, where they triggered symptoms such as eye strain and nausea.

Inferring the ancestral origin of DNA evidence recovered from crime scenes is crucial in forensic investigations, especially in the absence of a direct suspect match. Ancestry informative markers (AIMs) have been widely researched and commercially developed into panels targeting multiple continental regions. However, existing forensic ancestry inference panels typically group East Asian individuals into a homogenous category without further differentiation.
