We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements. Both the eyes and head were tracked while observers looked at natural scenes in a virtual reality (VR) environment. In line with previous work, we found a horizontal bias in saccade directions, but this bias was affected by both the image shape and its content. Interestingly, when viewing landscapes (but not fractals), observers rotated their head in line with the image rotation, presumably to make saccades in cardinal, rather than oblique, directions. We discuss our findings in relation to current theories of eye movement control, and how insights from VR might inform traditional eye-tracking studies.

Part 2: Observers looked at panoramic, 360° scenes using VR goggles while eye and head movements were tracked. Fixations were determined using I-DT (Salvucci & Goldberg, 2000) adapted to a spherical coordinate system. We then analyzed (a) the spatial distribution of fixations and the distribution of saccade directions, (b) the spatial distribution of head positions and the distribution of head movements, and (c) the relation between gaze and head movements. We found that, for landscape scenes, gaze and head orientations best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is taken into account. For fractal scenes, which are isotropic on average, the bias toward a body-centric frame is weak for gaze and strong for the head. Furthermore, our data show that eye and head movements are closely linked in space and time in stereotypical ways, with volitional eye movements predominantly leading the head. We discuss our results in terms of models of visual exploratory behavior in panoramic scenes, both in virtual and real environments.

https://vimeo.com/356859979
http://www.scians.ch/
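The abstract names the fixation-detection algorithm (I-DT, adapted to a spherical coordinate system) without detailing the adaptation. As a rough illustration only, the Python sketch below shows one plausible such adaptation: the classic dispersion-threshold algorithm applied to unit gaze direction vectors, with "dispersion" measured as the maximum pairwise great-circle angle. The 1° dispersion and 80 ms duration thresholds are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def angular_dispersion_deg(dirs):
    """Maximum pairwise great-circle angle (degrees) among unit gaze vectors."""
    cosines = np.clip(dirs @ dirs.T, -1.0, 1.0)
    return float(np.degrees(np.arccos(cosines)).max())

def idt_spherical(dirs, t, max_disp_deg=1.0, min_dur=0.08):
    """I-DT fixation detection on the sphere (illustrative sketch).

    dirs: (N, 3) unit gaze direction vectors (eye-in-head combined with head pose).
    t:    (N,) sample timestamps in seconds.
    Thresholds are assumed values, not the authors' settings.
    Returns a list of (start_index, end_index) fixation windows.
    """
    fixations, i, n = [], 0, len(dirs)
    while i < n:
        # Initial window spanning at least the minimum fixation duration.
        j = i
        while j < n and t[j] - t[i] < min_dur:
            j += 1
        if j >= n:
            break
        if angular_dispersion_deg(dirs[i:j + 1]) <= max_disp_deg:
            # Grow the window while the angular dispersion stays below threshold.
            while j + 1 < n and angular_dispersion_deg(dirs[i:j + 2]) <= max_disp_deg:
                j += 1
            fixations.append((i, j))
            i = j + 1  # Resume after the detected fixation.
        else:
            i += 1  # Slide the window forward by one sample.
    return fixations
```

Using great-circle angles between direction vectors, rather than pixel distances, keeps the dispersion measure meaningful across the full 360° panorama, where planar screen coordinates are undefined.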


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7917486
DOI: http://dx.doi.org/10.16910/jemr.12.7.11

Publication Analysis

Top Keywords

head movements: 24
eye head: 16
head: 13
eye movement: 8
image shape: 8
image rotation: 8
observers looked: 8
scenes virtual: 8
saccade directions: 8
spatial distribution: 8

Similar Publications

Background: High-field magnetic resonance imaging (MRI) is a powerful diagnostic tool but can induce unintended physiological effects, such as nystagmus and dizziness, potentially compromising the comfort and safety of individuals undergoing imaging. These effects likely result from the Lorentz force, which arises from the interaction between the MRI's static magnetic field and electrical currents in the inner ear. Yet the Lorentz force hypothesis fails to fully explain the eye movement patterns observed in healthy adults.


Context: Concussion causes physiological disruptions, including to the vestibular and visual systems, which can cause dizziness, imbalance, and blurry vision. The vestibular ocular reflex functions to maintain a stable visual field, which can be measured using the gaze stability test (GST).

Design: This preliminary study used retrospective chart review to examine changes in GST performance and asymmetry in a sample of 117 youth athletes with concussion (mean age = 14.


Introduction: The brainstem vestibular nuclei neurons receive synaptic inputs from inner ear acceleration-sensing hair cells, cerebellar output neurons, and ascending signals from spinal proprioceptive-related neurons. The lateral (LVST) and medial (MVST) vestibulospinal (VS) tracts convey their coded signals to the spinal circuits, rapidly countering externally imposed perturbations to facilitate stability and providing a framework for self-generated head movements.

Methods: The present study describes the morphological characteristics of intraaxonally recorded and labeled VS neurons monosynaptically connected to the 8th nerve.


We use our tongue much like our hands: to interact with objects and transport them. For example, we use our hands to sense properties of objects and transport them in the nearby space, and we use our tongue to sense properties of food morsels and transport them through the oral cavity. But what does the cerebellum contribute to the control of tongue movements? Here, we trained head-fixed marmosets to make skillful tongue movements to harvest food from small tubes that were placed at sharp angles to their mouth.


We explore the efficacy of multimodal behavioral cues for explainable prediction of personality and interview-specific traits. We utilize elementary head-motion units named kinemes, atomic facial movements termed action units, and speech features to estimate these human-centered traits. Empirical results confirm that kinemes and action units enable discovery of multiple trait-specific behaviors while also enabling explainability in support of the predictions.

