This study investigated the trainability of decision-making and reactive agility via video-based visual training in young athletes. Thirty-four members of a national football academy (age: 14.4 ± 0.1 years) were randomly assigned to a training group (VIS; n = 18) or a control group (CON; n = 16). In addition to their football training, the VIS group completed video-based visual training twice a week over a period of six weeks during the competition phase. Using the temporal occlusion technique, the players were instructed to react to one-on-one situations shown in 40 videos. The number of successful decisions and the response time were measured with a video-based test. In addition, the reactive-agility sprint test was used. VIS significantly improved the number of successful decisions (22.2 ± 3.6 vs. 29.8 ± 4.5; p < 0.001), response time (0.41 ± 0.10 s vs. 0.31 ± 0.10 s; p = 0.006) and reactive agility (2.22 ± 0.33 s vs. 1.94 ± 0.11 s; p = 0.001) from pre- to post-training. No significant differences were found for CON. The results show that video-based visual training improves the time to make decisions as well as reactive-agility sprint time, accompanied by an increase in successful decisions. It remains to be shown whether or not such training can improve simulated or actual game performance.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5968940
DOI: http://dx.doi.org/10.3390/sports4010001

Similar Publications

Objectives: This study aimed to develop an automated skills assessment tool for surgical trainees using deep learning.

Background: Optimal surgical performance in robot-assisted surgery (RAS) is essential for ensuring good surgical outcomes. This requires effective training of new surgeons, which currently relies on supervision and skill assessment by experienced surgeons.

Quantitative comparison of a mobile, tablet-based eye-tracker and two stationary, video-based eye-trackers.

Behav Res Methods

January 2025

Department Neurophysics, Philipps-Universität Marburg, Fachbereich Physik, AG Neurophysik, Karl-Von-Frisch-Straße 8a, 35043, Marburg, Lahnberge, Germany.

The analysis of eye movements is a noninvasive, reliable and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers, the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs), and compared them with the performance of a well-established video-based eye-tracker.

Senior police officers' tactical gaze control and visual attention improve with an individual video-based police firearms training. To validate the efficacy of said intervention training, a previous experiment was systematically replicated with a sample of n = 52 second-year police cadets. Participants were randomly assigned to the intervention training, which focused on situational awareness, tactical gaze control, and visual attention, or to an active control training that addressed traditional marksmanship skills.

Pedestrian Re-Identification Based on Fine-Grained Feature Learning and Fusion.

Sensors (Basel)

November 2024

Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.

Video-based pedestrian re-identification (Re-ID) is used to re-identify the same person across different camera views. One of the key problems is to learn an effective representation for the pedestrian from video. However, it is difficult to learn an effective representation from one single modality of a feature due to complicated issues with video, such as background, occlusion, and blurred scenes.

Pedestrians' perceptions, fixations, and decisions towards automated vehicles with varied appearances.

Accid Anal Prev

December 2024

Intelligent Transportation Systems Research Center, Wuhan University of Technology, Wuhan 430063, China; Engineering Research Center of Transportation Information and Safety, Ministry of Education, Wuhan 430063, China.

Future automated vehicles (AVs) are anticipated to feature innovative exteriors, such as textual identity indications, external radars, and external human-machine interfaces (eHMIs), as evidenced by current and forthcoming on-road testing prototypes. However, given the vulnerability of pedestrians in road traffic, it remains unclear how these novel AV appearances will impact pedestrians' crossing behaviour, especially in relation to their multimodal performance, including subjective perceptions, gaze patterns, and road-crossing decisions. To address this gap, this study pioneers an investigation into the influence of AVs' exterior design, in conjunction with their kinematics, on pedestrians' road-crossing perception and decision-making.
