Crowdsourced methods of data collection such as Amazon Mechanical Turk (MTurk) have been widely adopted in addiction science. Recent reports suggest an increase in poor-quality data on MTurk, posing a challenge to the validity of findings. However, empirical investigations of data quality in addiction-related samples are lacking. In this study of individuals with alcohol use disorder (AUD), we compared poor-quality delay discounting data to randomly generated data. A reanalysis of previously published delay discounting data was conducted comparing included, excluded, and randomly generated data samples, with nonsystematic-responding criteria implemented as the measure of data quality. The excluded data were statistically different from the included sample but did not differ from randomly generated data on multiple metrics. Moreover, a response bias was identified in the excluded data. This study provides empirical evidence that poor-quality delay discounting data in an AUD sample are not statistically different from randomly generated data, suggesting that data quality concerns on MTurk persist in addiction samples. These findings support the use of rigorous, a priori defined criteria to remove poor-quality data post hoc. Additionally, they highlight that nonsystematic delay discounting criteria constitute a rigorous means of removing poor-quality data, not simply a way of discarding data that do not conform to an expected theoretical model. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
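The abstract does not spell out the nonsystematic criteria used; a common choice in the delay discounting literature is the Johnson and Bickel (2008) pair of checks on the ordered indifference points. A minimal sketch, assuming those two criteria and hypothetical parameter values (the 20% bounce tolerance and 10% minimum decline are the conventional defaults, not values taken from this study):

```python
def is_nonsystematic(indiff, larger_later=100.0,
                     bounce_tol=0.20, min_drop=0.10):
    """Flag a delay discounting series as nonsystematic.

    indiff: indifference points ordered from shortest to longest delay,
    in the same units as the larger-later reward (larger_later).
    Criterion 1 ("bounce"): any point exceeds the preceding point by
    more than bounce_tol * larger_later, i.e., a non-monotonic jump.
    Criterion 2 ("no discounting"): the last point is not at least
    min_drop * larger_later below the first point.
    """
    bounce = any(indiff[i] - indiff[i - 1] > bounce_tol * larger_later
                 for i in range(1, len(indiff)))
    no_discounting = (indiff[0] - indiff[-1]) < min_drop * larger_later
    return bounce or no_discounting

# A smoothly declining series passes; a bouncing or flat series is flagged.
print(is_nonsystematic([95, 80, 60, 40, 20, 10]))  # systematic
print(is_nonsystematic([95, 40, 80, 30, 90, 10]))  # bounces upward
print(is_nonsystematic([90, 88, 89, 87, 88, 86]))  # never discounts
```

Because the criteria are defined before looking at the data and do not reference any particular discounting model, exclusions made this way are model-agnostic, which is the point the abstract's final sentence is making.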

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10132324
DOI: http://dx.doi.org/10.1037/pha0000549
