This study explored the influence of test repetition on bipodal, visually controlled balance, both static and dynamic. Our goal was to gain insight into how posture-maintenance scores change over repeated balance tests. Fifteen young, healthy male recreational athletes were tested for static and dynamic balance on the KAT 2000 balance platform. The subjects first performed three familiarization trials of the static and dynamic tests to get used to the platform, followed by seven recorded repetitions of each test. Across the repeated tests we found no significant improvement attributable to the number of repetitions in either static or dynamic balance (Friedman ANOVA: static balance p = 0.497, dynamic balance p = 0.393). Correlating the static and dynamic results, we found that only about one third of the variance in dynamic balance was explained by static balance ability (r2 = 0.36). Possible patterns in the front-back and left-right directions were analyzed as well; however, none of these balance scores were related to the number of repetitions. In conclusion, this study found no significant influence of a limited number of repetitions (seven) on static and dynamic posture test results. However, because a larger number of repetitions might still influence test results, we discourage using the KAT 2000 as a training tool in patients for whom it will later serve as an instrument to validate postoperative rehabilitation or research findings.
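The repeated-measures comparison described above (seven trials per subject, tested with a Friedman ANOVA) can be sketched in Python. This is a minimal illustration with synthetic data, not the study's measurements; the subject count (15) and repetition count (7) follow the abstract, while the score scale is an assumption.

```python
# Hypothetical sketch of the study's analysis: Friedman test across
# seven repeated balance trials for 15 subjects. All values below are
# synthetic and illustrative only.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# 15 subjects x 7 repetitions of an assumed balance-index score
# (lower = better), drawn around a constant mean, i.e. no learning effect.
scores = rng.normal(loc=500.0, scale=50.0, size=(15, 7))

# friedmanchisquare expects one sample per repeated condition,
# so pass the seven repetition columns as separate arguments.
stat, p = friedmanchisquare(*scores.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```

With no built-in trend across repetitions, the test would typically return a non-significant p-value, mirroring the pattern the study reports (p = 0.497 and p = 0.393).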

