The causes of the interindividual differences (IDs) in how we perceive and control spatial orientation are poorly understood. Here, we propose that IDs partly reflect preferred modes of spatial referencing and that these preferences or "styles" are maintained from the level of spatial perception to that of motor control. Two groups of subjects, one with high visual field dependence (FD) and one with marked visual field independence (FI), were identified with the Rod and Frame Test, which measures relative dependence on a visual frame of reference (VFoR). FD and FI subjects were asked to stand still under conditions of increasing postural difficulty while visual cues of self-orientation (a visual frame tilted in roll) and of self-motion (stroboscopic illumination) were varied, and in darkness, in order to assess visual dependence. Postural stability, overall body orientation, and modes of segmental stabilization in the roll plane relative to either an external (space) or an egocentric (adjacent segments) frame of reference were analysed. We hypothesized that a moderate challenge to balance should enhance subjects' reliance on the VFoR, particularly in FD subjects, whereas a substantial challenge should constrain subjects to use a somatic-vestibular-based FoR to prevent falling, in which case IDs would vanish. The results showed that with increasing difficulty, FD subjects became more unstable and more disoriented, as shown by larger effects of the tilted visual frame on posture. Furthermore, their preference for coaligning body and VFoR coordinate systems led to greater fixation of the head-trunk articulation and stabilization of the hip in space, whereas in FI subjects the head and trunk remained more stabilized in space and the hip remained fixed on the leg. These results show that FD subjects have difficulty identifying and/or adopting a more appropriate FoR, based on proprioceptive and vestibular cues, to regulate the coalignment of posturo- and exocentric FoRs. The FI subjects' resistance to an altered VFoR and to balance challenges resides in their greater ability to coordinate movement by coaligning body axes with more appropriate FoRs (provided by proprioceptive and vestibular co-variance).
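
For readers unfamiliar with how "stabilization in space" versus "fixation on an adjacent segment" is typically quantified, the sketch below illustrates one common measure from the postural-control literature: an anchoring-index-style ratio that compares a segment's angular dispersion in an external frame with its dispersion relative to the adjacent segment. The function name, the exact formulation (standard deviations rather than variances), and the simulated sway signals are illustrative assumptions, not the authors' analysis code.

import numpy as np

def anchoring_index(segment_roll, anchor_roll):
    """Anchoring-index-style measure for one body segment in the roll plane.

    segment_roll : roll angle of the segment of interest in space (deg)
    anchor_roll  : roll angle of the adjacent (supporting) segment in space (deg)

    Returns a value in [-1, 1]:
      > 0 : the segment is better stabilized in space (exocentric FoR)
      < 0 : the segment is better stabilized on the adjacent segment,
            i.e. the articulation between them is relatively fixed (egocentric FoR)
    """
    segment_roll = np.asarray(segment_roll, dtype=float)
    anchor_roll = np.asarray(anchor_roll, dtype=float)

    sd_space = np.std(segment_roll)                    # dispersion in an external frame
    sd_relative = np.std(segment_roll - anchor_roll)   # dispersion on the adjacent segment

    return (sd_relative - sd_space) / (sd_relative + sd_space)

# Illustrative simulated sway (hypothetical numbers: 30 s at 100 Hz)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 3000)
trunk = 1.5 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0.0, 0.2, t.size)

head_fixed_on_trunk = trunk + rng.normal(0.0, 0.1, t.size)   # head rides on the trunk
head_stable_in_space = rng.normal(0.0, 0.2, t.size)          # head held near vertical

print("head fixed on trunk:      AI =", round(anchoring_index(head_fixed_on_trunk, trunk), 2))
print("head stabilized in space: AI =", round(anchoring_index(head_stable_in_space, trunk), 2))

With these simulated signals, the first case yields a clearly negative index (articulation fixed, as described for FD subjects at the head-trunk level), and the second a clearly positive one (stabilization in space, as described for FI subjects).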

Source
http://dx.doi.org/10.1016/j.neuroscience.2010.05.072

Publication Analysis

Top Keywords: visual frame (12); frames reference (8); visual field (8); VFoR subjects (8); proprioceptive vestibular (8); subjects (7); visual (7); individual differences (4); differences ability (4); ability identify (4)

