Background: Many people with harmful addictive behaviors may not meet formal diagnostic thresholds for a disorder. A dimensional approach spanning clinical and community samples is therefore potentially key to early detection, prevention, and intervention. Importantly, although neurocognitive dysfunction underpins addictive behaviors, established neurocognitive assessment tools are lengthy and unengaging, difficult to administer at scale, and poorly suited to clinical or community needs. The BrainPark Assessment of Cognition (BrainPAC) Project sought to develop and validate an engaging, user-friendly digital assessment tool purpose-built to comprehensively assess the main consensus-driven neurocognitive constructs underpinning addictive behaviors.
Objective: The purpose of this study was to psychometrically validate a gamified battery of consensus-based neurocognitive tasks against standard laboratory paradigms, ascertain test-retest reliability, and determine their sensitivity to addictive behaviors (eg, alcohol use) and other risk factors (eg, trait impulsivity).
Methods: Gold standard laboratory paradigms were selected to measure the key neurocognitive constructs endorsed by an international panel of addiction experts, namely response selection and inhibition, reward valuation, action selection, reward learning, expectancy and reward prediction error, habit, and compulsivity. The selected paradigms were the Balloon Analogue Risk Task (BART), the Stop Signal Task (SST), the Delay Discounting Task (DDT), the Value-Modulated Attentional Capture (VMAC) Task, and the Sequential Decision-Making Task (SDT). Working with game developers, gamified BrainPAC versions of these tasks were developed and validated in 3 successive cohorts (total N=600) and a separate test-retest cohort (N=50), recruited via Mechanical Turk using a cross-sectional design.
Results: BrainPAC tasks correlated significantly with the original laboratory paradigms on most metrics (r=0.18-0.63, P<.05). Apart from the DDT k function and VMAC total points, task metrics across the 5 tasks did not differ significantly between the gamified and nongamified versions (P>.05). Four of the 5 tasks (all except the SDT) demonstrated adequate to excellent test-retest reliability (intraclass correlation coefficient 0.72-0.91, P<.001). Gamified metrics were significantly associated with addictive behaviors on behavioral inventories, though largely independent of trait-based scales known to predict addiction risk.
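For readers unfamiliar with the DDT metric, k conventionally denotes the discount rate in the standard hyperbolic model of delay discounting, V = A / (1 + kD), where V is the subjective value of a reward of amount A delayed by D. The abstract does not specify the authors' exact estimation procedure, so this is offered only as the conventional formulation, not as their stated method.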
Conclusions: A purpose-built battery of digitally gamified tasks is sufficiently valid for the scalable assessment of key neurocognitive processes underpinning addictive behaviors. This validation provides evidence that a novel approach to assessing addiction-related neurocognition, purported to enhance task engagement, is feasible and empirically defensible. These findings have significant implications for risk detection and for the successful deployment of next-generation assessment tools for substance use or misuse and other mental disorders characterized by neurocognitive anomalies related to motivation and self-regulation. Future development and validation of the BrainPAC tool should consider further enhancing convergence with established measures and collecting population-representative data for use as clinical normative comparisons.
Full-text PDF (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7615064
DOI: http://dx.doi.org/10.2196/44414