Investigations into whether playing action video games (AVGs) benefits other tasks, such as driving, have traditionally focused on gaming experience (i.e., hours played). A potential problem with this approach is that AVG experience only partially indexes the cognitive skills developed via gaming, which are presumably the source of other-task benefits. Thus, a focus on experience, instead of gaming proficiency (i.e., skill level), may account for inconsistencies in the existing literature. We investigated whether AVG experience or proficiency best predicts performance in simulated driving. We hypothesised that proficiency would better predict driving performance (speed control, lane maintenance, spare cognitive capacity) than experience. One hundred and sixteen participants drove in a simulator and played an AVG (Quake III Arena). Proficiency predicted all aspects of driving performance, while experience only predicted lane maintenance. These findings highlight the benefits of measuring AVG proficiency, with implications for the video games literature, driving safety and the transfer of skill-based learning.


Source: http://dx.doi.org/10.1038/s41598-024-82270-5


Similar Publications


Human motion similarity evaluation based on deep metric learning.

Sci Rep

December 2024

College of Sports, Beihua University, Jilin, 132000, China.

To eliminate the impact of camera viewpoint and differences between human skeletons on action-similarity evaluation, and to address human action similarity evaluation across different viewpoints, this article proposes a method based on deep metric learning. The method trains an encoder-decoder deep neural network on a purpose-built synthetic dataset, mapping sequences of 2D human-skeleton keypoints extracted from motion videos into three latent low-dimensional dense spaces. Action feature vectors independent of camera viewpoint and skeleton structure are extracted in these spaces, and motion similarity is measured on those features, effectively eliminating the effects of camera viewpoint and skeleton-size differences on the evaluation.
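The final step the abstract describes, scoring similarity on viewpoint-invariant feature vectors, can be sketched as a cosine comparison of encoder embeddings. This is a hypothetical illustration: `toy_encode` is a stand-in for the trained encoder, which the abstract does not specify.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def motion_similarity(seq_a: np.ndarray, seq_b: np.ndarray, encode) -> float:
    """Score two pose sequences by comparing their embeddings."""
    return cosine_similarity(encode(seq_a), encode(seq_b))

# Stand-in encoder: average the flattened frames. A real implementation
# would use the trained encoder that maps keypoint sequences into the
# latent, view-invariant spaces described in the abstract.
def toy_encode(seq: np.ndarray) -> np.ndarray:
    return seq.reshape(len(seq), -1).mean(axis=0)

same = np.ones((10, 17, 2))  # 10 frames, 17 keypoints, (x, y) coordinates
print(motion_similarity(same, same, toy_encode))  # identical motions -> 1.0
```

Measuring similarity in the embedding space, rather than on raw keypoints, is what makes the comparison insensitive to viewpoint and skeleton size.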


Background: Pain in people with cerebral palsy (CP) has been classically underestimated and poorly treated, particularly in individuals with impaired communication skills.

Objective: To analyze changes in different salivary metabolites and pain behavior scales after a painful procedure in adults with CP and adults with typical development.

Methods: Salivary levels of sTNF-α, sIgA, Cortisol, FRAP, ADA and Alpha Amylase, as well as 3 observational pain scales (Wong-Baker, Non-Communicating Adults Pain Checklist and Facial Action Coding System), were assessed before and after an intramuscular injection in 30 individuals with CP and 30 healthy controls.


Temporal Gap-Aware Attention Model for Temporal Action Proposal Generation.

J Imaging

November 2024

National Electronic and Computer Technology Center, National Science and Technology Development Agency, Khlong Nueng, Khlong Luang District, Pathum Thani 12120, Thailand.

Temporal action proposal generation is a method for extracting temporal action instances or proposals from untrimmed videos. Existing methods often struggle to segment contiguous action proposals, which are a group of action boundaries with small temporal gaps. To address this limitation, we propose incorporating an attention mechanism to weigh the importance of each proposal within a contiguous group.
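The weighting step described above can be sketched as a softmax over per-proposal scores. This is a hypothetical illustration: the assumption that each proposal in a contiguous group carries a scalar confidence score is mine; the paper's actual scoring model is not described here.

```python
import numpy as np

def attention_weights(scores: np.ndarray) -> np.ndarray:
    """Softmax over per-proposal scores: importance weights summing to 1."""
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical confidence scores for three proposals in one contiguous group.
scores = np.array([2.0, 0.5, 1.0])
w = attention_weights(scores)
print(w.round(3))  # the highest-scoring proposal receives the largest weight
```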


Face to face: The eyes as an anchor in multimodal communication.

Cognition

December 2024

Department of Cognitive Science and Artificial Intelligence, Tilburg University, the Netherlands.

Making eye contact with our conversational partners is one of the most common behaviors in multimodal communication. Yet, little is known about this behavior. Prior studies have reported differing findings on where we look in a narrator's face.

