In response to Webb and Tangney (2022), we call into question the conclusion that data collected on Amazon's Mechanical Turk (MTurk) were "at best, only 2.6% valid" (p. 1). We suggest that Webb and Tangney made certain choices during study design and data collection that adversely affected the quality of the data they collected. As a result, the authors' anecdotal experience provides only weak evidence for the implied claim that MTurk yields low-quality data. In our commentary, we highlight best-practice recommendations and make suggestions for more effectively collecting and screening online panel data.
DOI: http://dx.doi.org/10.1177/17456916241234328