The effects of visual input on postural control mechanisms: an analysis of center-of-pressure trajectories using the auto-regressive model.

J Hum Ergol (Tokyo)

Department of Human-Computer Interaction Science, Faculty of Technology, Tokyo University of Agriculture and Technology, 2-24-16 Naka-cho, Koganei, Tokyo, 184-8588 Japan.

Published: December 2000

New measures to characterize center-of-pressure (COP) trajectories during quiet standing were proposed and then used to investigate changes in postural control with respect to visual input. Eleven healthy male subjects (aged 20-27 years) participated in this study. An instrumented force platform was used to measure the time-varying displacements of the COP under each subject's feet during quiet standing. The subjects were tested under eyes-open and eyes-closed conditions. The COP time series were analyzed separately for the medio-lateral and antero-posterior directions. The proposed measures were obtained from the parameter estimates of auto-regressive (AR) models. The percentage contributions and the geometrical moment of the AR coefficients showed statistically significant differences between vision conditions. Under the eyes-open condition, the present COP displacement showed a higher correlation with past COP displacements at longer lag times than under the eyes-closed condition. In contrast, no significant differences between vision conditions were found for conventional summary statistics, e.g., the total length of the COP path. These results suggest that AR parameters are useful for evaluating postural stability and balance function, even in healthy young individuals. The role of visual input in the postural control system and the implications of these findings are discussed.
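The abstract does not specify the estimation procedure for the AR coefficients, and the paper's specific measures (percentage contributions, geometrical moment) are not defined here. As a minimal sketch of the kind of analysis described, the following fits an AR(p) model to a one-dimensional COP-like time series via the Yule-Walker equations; the synthetic signal and the chosen order are illustrative assumptions, not the study's actual data or method.

```python
import numpy as np

def fit_ar_yule_walker(x, order):
    """Estimate AR(order) coefficients of a time series via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()  # work with a zero-mean series
    n = len(x)
    # Biased autocovariance estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r[1:], where R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])  # a[k] weights x[t-k-1] in predicting x[t]

# Synthetic COP-like displacement with known AR(2) dynamics (illustrative only)
rng = np.random.default_rng(0)
true_a = [0.75, -0.2]
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + rng.standard_normal()

a_hat = fit_ar_yule_walker(x, order=2)  # recovers coefficients close to true_a
```

Larger estimated coefficients at longer lags would correspond to the abstract's observation that, with eyes open, the present COP displacement correlates more strongly with displacements further in the past.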
