Over the past few years, an increasing number of studies have shown that playing action video games can have positive effects on tasks involving attention and visuo-spatial cognition (e.g., visual search, enumeration, tracking multiple objects). Although playing action video games can improve several cognitive functions, intensive interaction with exciting, challenging, intrinsically stimulating and perceptually appealing game environments may adversely affect other functions, including the ability to maintain attention when the level of stimulation is less intense. This study investigated whether a relationship exists between action video gaming and sustained attention performance in a sample of 45 Italian teenagers. After completing a questionnaire about their video game habits, participants were divided into Action Video Game Player (AVGP) and Non-Action Video Game Player (NAVGP) groups and underwent cognitive tests. The results confirm previous findings on AVGPs, who showed significantly enhanced performance in instantly enumerating a set of items. Nevertheless, we found that the drop in performance over time, typical of a sustained attention task, was significantly greater in the AVGP than in the NAVGP group. This result is consistent with our hypothesis and demonstrates a negative effect of playing action video games.
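The abstract does not spell out the analysis, but the group comparison it describes could be quantified roughly as in the sketch below: estimate each participant's decline in accuracy across task blocks (the vigilance decrement) and compare the two groups. The function names, the linear-slope measure, and the independent-samples t-test are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch (not the study's actual analysis pipeline): quantify the
# drop in sustained-attention performance over time as the slope of per-block
# accuracy, then compare AVGP and NAVGP groups.
import numpy as np
from scipy import stats

def vigilance_decrement(block_accuracy):
    """Least-squares slope of accuracy across consecutive task blocks.

    A more negative slope means a steeper decline in performance over time.
    """
    blocks = np.arange(len(block_accuracy))
    slope, _intercept = np.polyfit(blocks, block_accuracy, 1)
    return slope

def compare_groups(avgp_blocks, navgp_blocks):
    """Independent-samples t-test on per-participant decrement slopes.

    Each argument is a list of per-participant arrays of block accuracies.
    """
    avgp_slopes = [vigilance_decrement(p) for p in avgp_blocks]
    navgp_slopes = [vigilance_decrement(p) for p in navgp_blocks]
    return stats.ttest_ind(avgp_slopes, navgp_slopes)
```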


Source: http://dx.doi.org/10.1080/17470218.2017.1310912

Publication Analysis

Top Keywords (keyword: frequency)

action video: 24; sustained attention: 12; playing action: 12; video games: 12; video game: 12; video gaming: 8; gaming sustained: 8; game player: 8; video: 7; action: 6

Similar Publications

Investigations into whether playing action video games (AVGs) benefit other tasks, such as driving, have traditionally focused on gaming experience (i.e., hours played).


Human motion similarity evaluation based on deep metric learning.

Sci Rep

December 2024

College of Sports, Beihua University, Jilin, 132000, China.

To eliminate the impact of camera viewpoint and differences between human skeletons on action similarity evaluation, and to address evaluating the similarity of human actions seen from different viewpoints, this article proposes a method based on deep metric learning. The method trains an encoder-decoder deep neural network on a custom-built synthetic dataset, mapping the 2D human skeletal key point sequences extracted from motion videos into three low-dimensional dense latent spaces. Action feature vectors independent of camera viewpoint and human skeleton structure are extracted in these spaces, and motion similarity is measured on these features, effectively eliminating the effects of camera viewpoint and skeleton size differences on the evaluation.
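As a minimal sketch of the general idea (the GRU encoder, layer sizes, embedding dimension, and cosine metric below are assumptions, not the paper's exact model), a 2D key-point sequence can be encoded into a compact, viewpoint-agnostic embedding, and two motions compared on those embeddings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Encodes a 2D key-point sequence into a fixed-size motion embedding."""
    def __init__(self, num_joints=17, hidden_size=128, embed_dim=32):
        super().__init__()
        self.rnn = nn.GRU(num_joints * 2, hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, embed_dim)

    def forward(self, keypoints):
        # keypoints: (batch, frames, num_joints * 2) flattened x/y coordinates
        _, last_hidden = self.rnn(keypoints)
        return self.proj(last_hidden[-1])  # (batch, embed_dim)

def motion_similarity(encoder, seq_a, seq_b):
    """Cosine similarity between the embeddings of two key-point sequences."""
    with torch.no_grad():
        emb_a = encoder(seq_a.unsqueeze(0))  # add a batch dimension
        emb_b = encoder(seq_b.unsqueeze(0))
    return F.cosine_similarity(emb_a, emb_b).item()
```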


Background: Pain in people with cerebral palsy (CP) has been classically underestimated and poorly treated, particularly in individuals with impaired communication skills.

Objective: To analyze changes in different salivary metabolites and pain behavior scales after a painful procedure in adults with CP and adults with typical development.

Methods: Salivary levels of sTNF-α, sIgA, cortisol, FRAP, ADA and alpha-amylase, as well as 3 observational pain scales (Wong-Baker, Non-Communicating Adults Pain Checklist and Facial Action Coding System) were assessed before and after an intramuscular injection in 30 individuals with CP and 30 healthy controls.


Temporal Gap-Aware Attention Model for Temporal Action Proposal Generation.

J Imaging

November 2024

National Electronic and Computer Technology Center, National Science and Technology Development Agency, Khlong Nueng, Khlong Luang District, Pathum Thani 12120, Thailand.

Temporal action proposal generation is a method for extracting temporal action instances, or proposals, from untrimmed videos. Existing methods often struggle to segment contiguous action proposals, that is, groups of action boundaries separated by small temporal gaps. To address this limitation, we propose incorporating an attention mechanism to weigh the importance of each proposal within a contiguous group.
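A minimal illustration of this kind of weighting (the snippet does not give the paper's exact formulation, so the scaled dot-product form and the function below are assumptions) might apply self-attention over the feature vectors of the proposals in one contiguous group:

```python
import torch
import torch.nn.functional as F

def weigh_contiguous_proposals(proposal_feats):
    """Self-attention over the proposals of one contiguous group.

    proposal_feats: (num_proposals, dim) feature vectors; returns the
    attention-refined features and the per-proposal weight matrix.
    """
    dim = proposal_feats.size(-1)
    scores = proposal_feats @ proposal_feats.T / dim ** 0.5  # pairwise affinities
    weights = F.softmax(scores, dim=-1)                      # importance weights
    refined = weights @ proposal_feats                       # weighted combination
    return refined, weights
```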


Face to face: The eyes as an anchor in multimodal communication.

Cognition

December 2024

Department of Cognitive Science and Artificial Intelligence, Tilburg University, the Netherlands.

Making eye contact with our conversational partners is one of the most common behaviors in multimodal communication. Yet little is known about this behavior. Prior studies have reported differing findings on what we look at in the narrator's face.

