AI Article Synopsis

  • ADHD impacts many children, emphasizing the need for early detection through eye movement analysis linked to attention and memory skills.
  • Researchers gathered eye-tracking data from children with ADHD and typically developing children using various behavioral tasks to explore this relationship.
  • Machine learning identified 33 eye-tracking features that could successfully distinguish ADHD cases, achieving a 76.3% accuracy rate, comparable to traditional attention tests.

Article Abstract

Introduction: Attention-deficit/hyperactivity disorder (ADHD) affects a significant proportion of the pediatric population, making early detection crucial for effective intervention. Eye movements are controlled by brain regions associated with neuropsychological functions, such as selective attention, response inhibition, and working memory, and their deficits are related to the core characteristics of ADHD. Herein, we aimed to develop a screening model for ADHD using machine learning (ML) and eye-tracking features from tasks that reflect neuropsychological deficits in ADHD.

Methods: Fifty-six children (mean age 8.38 ± 1.58, 45 males) diagnosed with ADHD based on the Diagnostic and Statistical Manual of Mental Disorders, fifth edition were recruited along with seventy-nine typically developing children (TDC) (mean age 8.80 ± 1.82, 33 males). Eye-tracking data were collected using a digital device during the performance of five behavioral tasks measuring selective attention, working memory, and response inhibition (pro-saccade task, anti-saccade task, memory-guided saccade task, change detection task, and Stroop task). ML was employed to select relevant eye-tracking features for ADHD, and to subsequently construct an optimal model classifying ADHD from TDC.

Results: We identified 33 eye-tracking features in the five tasks with the potential to distinguish children with ADHD from TDC. Participants with ADHD showed increased saccade latency and degree, and shorter fixation time in eye-tracking tasks. A soft voting model integrating extra tree and random forest classifiers demonstrated high accuracy (76.3%) at identifying ADHD using eye-tracking features alone. A comparison of the model using only eye-tracking features with models using the Advanced Test of Attention or Stroop test showed no significant difference in the area under the curve (AUC) (p = 0.419 and p = 0.235, respectively). Combining demographic, behavioral, and clinical data with eye-tracking features improved accuracy, but did not significantly alter the AUC (p = 0.208).
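The classifier described above can be sketched with scikit-learn's `VotingClassifier`, combining an extra-trees and a random-forest estimator with soft (probability-averaged) voting. This is an illustrative reconstruction only: the feature matrix below is synthetic random data standing in for the study's 135 participants and 33 eye-tracking features, and the hyperparameters are defaults, not the authors' tuned settings.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in data: 135 participants x 33 eye-tracking features
X = rng.normal(size=(135, 33))
y = rng.integers(0, 2, size=135)  # placeholder labels: 0 = TDC, 1 = ADHD

# Soft voting averages the predicted class probabilities of both ensembles
clf = VotingClassifier(
    estimators=[
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)

# 5-fold cross-validated accuracy (meaningless on random data; shown for shape)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean())
```

On real features, the same pipeline would report the cross-validated accuracy analogous to the 76.3% figure above; with the random placeholder data, the score is near chance.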

Discussion: Our study suggests that eye-tracking features hold promise as ADHD screening tools, even when obtained using a simple digital device. The current findings emphasize that eye-tracking features could be reliable indicators of impaired neurobiological functioning in individuals with ADHD. To enhance utility as a screening tool, future research should be conducted with a larger sample of participants with a more balanced gender ratio.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10902460
DOI: http://dx.doi.org/10.3389/fpsyt.2024.1337595

Publication Analysis

Top Keywords

eye-tracking features (32)
adhd (12)
eye-tracking (10)
adhd screening (8)
machine learning (8)
selective attention (8)
response inhibition (8)
working memory (8)
features (8)
features tasks (8)

Similar Publications

Driving-Related Cognitive Abilities Prediction Based on Transformer's Multimodal Fusion Framework.

Sensors (Basel)

December 2024

Faculty of Information Science and Technology, Beijing University of Technology, Beijing 100124, China.

With the increasing complexity of urban roads and rising traffic flow, traffic safety has become a critical societal concern. Current research primarily addresses drivers' attention, reaction speed, and perceptual abilities, but comprehensive assessments of cognitive abilities in complex traffic environments are lacking. This study, grounded in cognitive science and neuropsychology, identifies and quantitatively evaluates ten cognitive components related to driving decision-making, execution, and psychological states by analyzing video footage of drivers' actions.


Background And Aim: The progressive nature of type 2 diabetes often, in time, necessitates basal insulin therapy to achieve glycemic targets. However, despite standardized titration algorithms, many people remain poorly controlled after initiating insulin therapy, leading to suboptimal glycemic control and complications. Both healthcare professionals and people with type 2 diabetes have expressed the need for novel tools to aid in this process.


Trust is a crucial human factor in automated supervisory control tasks. To attain appropriate reliance, the operator's trust should be calibrated to reflect the system's capabilities. This study utilized eye-tracking technology to explore novel approaches, given the intrusive, subjective, and sporadic characteristics of existing trust measurement methods.


Background: Eye movement research serves as a critical tool for assessing brain function, diagnosing neurological and psychiatric disorders, and understanding cognition and behavior. Sex differences have largely been underreported or ignored in neurological research. However, eye movement features provide biomarkers that are useful for disease classification with superior accuracy and robustness compared to previous classifiers for neurological diseases.


Comprehensive VR dataset for machine learning: Head- and eye-centred video and positional data.

Data Brief

December 2024

Department of Neurophysics, Philipps University Marburg, Karl-von-Frisch Straße 8a, 35043 Marburg, Hesse, Germany.

We present a comprehensive dataset comprising head- and eye-centred video recordings from human participants performing a search task in a variety of Virtual Reality (VR) environments. Using a VR motion platform, participants navigated these environments freely while their eye movements and positional data were captured and stored in CSV format. The dataset spans six distinct environments, including one specifically for calibrating the motion platform, and provides a cumulative playtime of over 10 h for both head- and eye-centred perspectives.

