Over the past few years, an increasing number of studies have shown that playing action video games can have positive effects on tasks that involve attention and visuo-spatial cognition (e.g., visual search, enumeration tasks, tracking multiple objects). Although playing action video games can improve several cognitive functions, the intensive interaction with the exciting, challenging, intrinsically stimulating and perceptually appealing game environments may adversely affect other functions, including the ability to maintain attention when the level of stimulation is not as intense. This study investigated whether a relationship exists between action video gaming and sustained attention performance in a sample of 45 Italian teenagers. After completing a questionnaire about their video game habits, participants were divided into Action Video Game Player (AVGP) and Non-Action Video Game Player (NAVGP) groups and underwent cognitive tests. The results confirm previous findings on AVGPs, who showed significantly enhanced performance in instantly enumerating a set of items. Nevertheless, we found that the drop in performance over time, typical of a sustained attention task, was significantly greater in the AVGP than in the NAVGP group. This result is consistent with our hypothesis and demonstrates a negative effect of playing action video games.
DOI: http://dx.doi.org/10.1080/17470218.2017.1310912
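As a rough illustration of the kind of group comparison the abstract describes, the sketch below computes each participant's performance drop across a sustained attention task and contrasts the two groups. The group sizes, block structure, simulated data, and the choice of an independent-samples t-test are all assumptions for illustration, not the authors' actual analysis.

```python
# Hypothetical sketch: comparing the time-on-task performance decrement
# between AVGP and NAVGP groups. All numbers below are placeholder data,
# not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant accuracy in the first and last blocks of a
# sustained attention task (assumed 23 vs. 22 split of the 45 participants).
avgp_first, avgp_last = rng.normal(0.95, 0.03, 23), rng.normal(0.88, 0.05, 23)
navgp_first, navgp_last = rng.normal(0.94, 0.03, 22), rng.normal(0.92, 0.04, 22)

# Performance decrement = first-block accuracy minus last-block accuracy.
avgp_drop = avgp_first - avgp_last
navgp_drop = navgp_first - navgp_last

# Independent-samples t-test on the decrement scores.
t, p = stats.ttest_ind(avgp_drop, navgp_drop)
print(f"AVGP mean drop {avgp_drop.mean():.3f}, NAVGP mean drop {navgp_drop.mean():.3f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```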
Sci Rep
December 2024
School of Psychological Science, The University of Western Australia, 35 Stirling Highway, Perth, WA, 6009, Australia.
Investigations into whether playing action video games (AVGs) benefits other tasks, such as driving, have traditionally focused on gaming experience (i.e., hours played).
Sci Rep
December 2024
College of Sports, Beihua University, Jilin, 132000, China.
To eliminate the impact of camera viewpoint and human skeleton differences on action similarity evaluation, and to address human action similarity evaluation under different viewpoints, this article proposes a method based on deep metric learning. The method trains an encoder-decoder deep neural network on a custom synthetic dataset, mapping 2D human skeletal key-point sequences extracted from motion videos into three low-dimensional dense latent spaces. Action feature vectors independent of camera viewpoint and human skeleton structure are extracted in these spaces, and motion similarity metrics are computed from these features, effectively eliminating the effects of camera viewpoint and skeleton size differences on similarity evaluation.
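A minimal sketch of the general approach described above, assuming a GRU encoder over a 17-joint 2D skeleton and cosine similarity as the metric; these specific choices are illustrative and not taken from the article.

```python
# Sketch: embed 2D skeleton key-point sequences and score action similarity.
# Layer sizes, the GRU encoder, and cosine similarity are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkeletonEncoder(nn.Module):
    def __init__(self, n_joints=17, embed_dim=64):
        super().__init__()
        # Each frame: (x, y) for every key point, flattened.
        self.rnn = nn.GRU(input_size=n_joints * 2, hidden_size=128, batch_first=True)
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, seq):            # seq: (batch, frames, n_joints * 2)
        _, h = self.rnn(seq)           # h: (1, batch, 128)
        z = self.proj(h.squeeze(0))    # (batch, embed_dim)
        return F.normalize(z, dim=-1)  # unit-length action embedding

encoder = SkeletonEncoder()
a = torch.randn(1, 60, 34)  # 60-frame clip, 17 key points per frame
b = torch.randn(1, 60, 34)
similarity = F.cosine_similarity(encoder(a), encoder(b))  # score in [-1, 1]
print(similarity.item())
```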
PLoS One
December 2024
Research Institute on Health Sciences (IUNICS-IdISBa), University of the Balearic Islands, Palma de Mallorca, Spain.
Background: Pain in people with cerebral palsy (CP) has been classically underestimated and poorly treated, particularly in individuals with impaired communication skills.
Objective: To analyze changes in different salivary metabolites and pain behavior scales after a painful procedure in adults with CP and adults with typical development.
Methods: Salivary levels of sTNF-α, sIgA, Cortisol, FRAP, ADA and Alpha Amylase, as well as 3 observational pain scales (Wong-Baker, Non-Communicating Adults Pain Checklist and Facial Action Coding System) were assessed before and after an intramuscular injection in 30 individuals with CP and 30 healthy controls.
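The sketch below illustrates, with placeholder data, one way such pre/post salivary measurements might be compared within and between groups; the marker values and the test choices (Wilcoxon, Mann-Whitney) are assumptions, not the study's reported analysis.

```python
# Illustrative only: paired pre/post comparison of one salivary marker within
# each group, then a between-group comparison of the change scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cp_pre, cp_post = rng.normal(10, 2, 30), rng.normal(13, 2, 30)  # placeholder values
td_pre, td_post = rng.normal(10, 2, 30), rng.normal(11, 2, 30)

# Within-group pre/post change (non-parametric, paired).
print(stats.wilcoxon(cp_pre, cp_post))
print(stats.wilcoxon(td_pre, td_post))

# Between-group comparison of the pre-to-post change scores.
print(stats.mannwhitneyu(cp_post - cp_pre, td_post - td_pre))
```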
J Imaging
November 2024
National Electronic and Computer Technology Center, National Science and Technology Development Agency, Khlong Nueng, Khlong Luang District, Pathum Thani 12120, Thailand.
Temporal action proposal generation is a method for extracting temporal action instances or proposals from untrimmed videos. Existing methods often struggle to segment contiguous action proposals, that is, groups of action boundaries separated by small temporal gaps. To address this limitation, we propose incorporating an attention mechanism to weigh the importance of each proposal within a contiguous group.
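A small sketch of the stated idea, attention weights over the proposals in a contiguous group; the linear scoring layer, softmax weighting, and feature pooling are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: score each proposal in a contiguous group with an attention weight
# and pool the group into a single feature vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProposalAttention(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # one attention logit per proposal

    def forward(self, feats):                        # feats: (n_proposals, feat_dim)
        logits = self.score(feats).squeeze(-1)       # (n_proposals,)
        weights = F.softmax(logits, dim=0)           # importance of each proposal
        pooled = (weights.unsqueeze(-1) * feats).sum(dim=0)  # group-level feature
        return weights, pooled

group_feats = torch.randn(5, 256)  # 5 contiguous candidate proposals
attn = ProposalAttention()
weights, pooled = attn(group_feats)
print(weights)  # relative importance of each proposal within the group
```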
Cognition
December 2024
Department of Cognitive Science and Artificial Intelligence, Tilburg University, the Netherlands.
Making eye contact with our conversational partners is the most common behavior in multimodal communication. Yet, little is known about this behavior. Prior studies have reported differing findings on what we look at in the narrator's face.