New measures to characterize center-of-pressure (COP) trajectories during quiet standing were proposed and then used to investigate changes in postural control with respect to visual input. Eleven healthy male subjects (aged 20-27 years) participated in this study. An instrumented force platform measured the time-varying displacements of the COP under each subject's feet during quiet standing, under both eyes-open and eyes-closed conditions. The COP time series were analyzed separately for the medio-lateral and antero-posterior directions. The proposed measures were obtained from the parameter estimation of auto-regressive (AR) models. The percentage contributions and geometrical moment of the AR coefficients showed statistically significant differences between vision conditions. Under the eyes-open condition, the current COP displacement was more strongly correlated with past COP displacements at longer lag times than under the eyes-closed condition. In contrast, no significant differences between vision conditions were found for conventional summary statistics, e.g., the total length of the COP path. These results suggest that the AR parameters are useful for evaluating postural stability and balance function, even in healthy young individuals. The role of visual input in the postural control system and the implications of the findings are discussed.
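As an illustration of the analysis described above, the sketch below fits an AR model to a one-dimensional COP displacement signal by ordinary least squares. The synthetic signal, the model order, and the definitions used for "percentage contribution" and "geometrical moment" are assumptions for illustration only, not the authors' published formulas.

```python
# Minimal sketch: least-squares AR(p) fit to a 1-D COP displacement signal.
# The synthetic signal, model order, and the two derived measures below are
# illustrative assumptions, not the paper's exact definitions.
import numpy as np

def fit_ar(x, p):
    """Fit x[t] = a_1*x[t-1] + ... + a_p*x[t-p] + e[t] by least squares."""
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

rng = np.random.default_rng(0)
cop = np.cumsum(rng.standard_normal(3000)) * 0.01    # placeholder COP trace (cm)

order = 10                                           # assumed model order
a = fit_ar(cop, order)                               # AR coefficients a_1..a_p

# One plausible reading of the proposed measures (assumption):
contrib = np.abs(a) / np.abs(a).sum()                # "percentage contribution" per lag
moment = np.sum(np.arange(1, order + 1) * contrib)   # "geometrical moment" (lag centroid)
print(contrib.round(3), round(moment, 2))
```

Under this reading, a larger moment would indicate that more distant past displacements carry more weight, consistent with the longer-lag correlations reported for the eyes-open condition.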
Med Phys
January 2025
Department of Chemistry, Faculty of Science, Hokkaido University, Sapporo, Hokkaido, Japan.
Background: The use of iodinated contrast-enhancing agents in computed tomography (CT) improves the visualization of relevant structures for radiotherapy treatment planning (RTP). However, it can lead to dose calculation errors, because contrast-enhanced CT numbers are incorrectly converted to electron density.
Purpose: This study aimed to propose an algorithm for deriving virtual non-contrast (VNC) electron density from dual-energy CT (DECT) data.
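To make the Background concrete, here is a minimal sketch of the CT-number-to-electron-density conversion step where contrast agents cause errors: CT numbers are mapped to relative electron density through a calibration curve, so iodine-elevated CT numbers yield overestimated densities. The calibration points below are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of CT-number -> relative electron density (RED) conversion
# via a piecewise-linear calibration curve. Calibration points are
# hypothetical placeholders, not measured values from the paper.
import numpy as np

hu_points  = np.array([-1000.0, 0.0, 1000.0, 3000.0])  # CT numbers (HU)
red_points = np.array([0.001, 1.0, 1.5, 2.5])          # relative electron density

def hu_to_red(hu):
    """Interpolate RED from CT number along the calibration curve."""
    return np.interp(hu, hu_points, red_points)

# Iodine enhancement raises the CT number of blood (~40 HU) to, say, 300 HU,
# so the converted RED is overestimated unless a VNC image is used instead.
print(hu_to_red(np.array([40.0, 300.0])))
```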
Sci Rep
January 2025
Department of Electrical Power, Adama Science and Technology University, Adama, 1888, Ethiopia.
Although the Transformer architecture has established itself as the de facto standard for natural language processing tasks, its applications in computer vision remain limited. In vision, attention is used either in conjunction with convolutional networks or to replace individual convolutional network components while preserving the overall network design. Differences between the two domains, such as large variations in the scale of visual entities and the much higher resolution of pixels in images compared with words in text, make it difficult to transfer the Transformer from language to vision.
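To ground the scaling argument, the NumPy sketch below implements plain scaled dot-product self-attention; its n x n score matrix makes explicit why dense pixel tokens are costlier than word tokens. All names and sizes are illustrative assumptions.

```python
# Minimal sketch of scaled dot-product self-attention. The (n, n) score
# matrix grows quadratically with the token count, which is why raw pixels
# (many tokens) are harder to handle than words (few tokens).
import numpy as np

def self_attention(x, wq, wk, wv):
    """x: (n_tokens, d). Returns (n_tokens, d) attended features."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])      # (n, n): quadratic in tokens
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)    # softmax over keys
    return weights @ v

d = 16
x = np.random.default_rng(1).standard_normal((196, d))   # e.g. 14x14 patch tokens
w = [np.random.default_rng(i).standard_normal((d, d)) for i in (2, 3, 4)]
print(self_attention(x, *w).shape)               # (196, 16)
```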
Sci Rep
January 2025
Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada.
People with amblyopia show deficits in global motion perception, especially at slow speeds. These observers are also known to have unstable fixation when viewing stationary fixation targets, relative to healthy controls. It is possible that poor fixation stability during motion viewing interferes with the fidelity of the input to motion-sensitive neurons in visual cortex.
Nutrients
January 2025
Department of Computer Engineering, Inje University, Gimhae 50834, Republic of Korea.
Background: Food image recognition, a crucial step in computational gastronomy, has diverse applications across nutritional platforms. Convolutional neural networks (CNNs) are widely used for this task due to their ability to capture hierarchical features. However, they struggle with long-range dependencies and global feature extraction, which are vital for distinguishing visually similar foods or images where the context of the whole dish matters, motivating the use of transformer architectures.
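A minimal PyTorch sketch of the hybrid idea motivated above, assuming placeholder layer sizes and a two-class head (not the paper's architecture): convolutions extract local hierarchical features, and a transformer encoder then models long-range dependencies across the resulting tokens.

```python
# Minimal sketch of a CNN + transformer hybrid classifier. Layer sizes and
# the 2-class head are illustrative placeholders, not the paper's model.
import torch
import torch.nn as nn

class CNNTransformer(nn.Module):
    def __init__(self, n_classes=2, d=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # local feature extraction
            nn.Conv2d(3, d, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(d, d, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # global context
        self.head = nn.Linear(d, n_classes)

    def forward(self, x):                         # x: (B, 3, H, W)
        f = self.cnn(x)                           # (B, d, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)     # (B, N, d) token sequence
        z = self.encoder(tokens).mean(dim=1)      # pool over tokens
        return self.head(z)

print(CNNTransformer()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 2])
```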
Sensors (Basel)
January 2025
Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA.
Universal image segmentation aims to handle all segmentation tasks within a single model architecture, ideally requiring only one training phase. To achieve task-conditioned joint training, a task token is used during multi-task training to condition the model for a specific task. Existing approaches generate the task token from a text input (e.
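As a toy illustration of the task-token idea, the sketch below injects a learned task embedding into shared features so a single model can be steered toward a specific segmentation task. The task names, the additive conditioning scheme, and the sizes are assumptions, not the approach of any particular paper.

```python
# Minimal sketch of task-token conditioning: one learned vector per task is
# added to shared per-pixel features. All names and sizes are assumptions.
import numpy as np

d = 32
task_tokens = {                                   # one learned vector per task
    "semantic": np.random.default_rng(0).standard_normal(d),
    "instance": np.random.default_rng(1).standard_normal(d),
    "panoptic": np.random.default_rng(2).standard_normal(d),
}

def condition(features, task):
    """features: (n_pixels, d). Adds the task token to every pixel feature."""
    return features + task_tokens[task]           # simple additive conditioning

feats = np.random.default_rng(3).standard_normal((100, d))
print(condition(feats, "panoptic").shape)         # (100, 32)
```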