Data quality in crowdsourcing campaigns is a prominent research topic, given the diverse range of participants involved. A potential approach to enhancing data quality in crowdsourcing is cognitive personalization, which involves adapting or assigning tasks according to a crowd worker's cognitive profile. There are two common methods for assessing a crowd worker's cognitive profile: administering online cognitive tests, and inferring behavior through task fingerprinting based on user interaction log events. This article presents the findings of a study that investigated the complementarity of both approaches in a microtask scenario, focusing on personalizing task design. The study involved 134 unique crowd workers recruited from a crowdsourcing marketplace. The first objective was to examine how the administration of cognitive ability tests can be used to allocate crowd workers to microtasks of varying difficulty, which included the development of a deep learning model. The second objective was to investigate whether task fingerprinting can be used to allocate crowd workers to different microtasks in a personalized manner. The results indicate that both objectives were accomplished, supporting the use of cognitive tests and task fingerprinting as effective mechanisms for microtask personalization; the deep learning model predicted microtask accuracy with 95% accuracy. Although the model achieved 95% accuracy, the small size of the dataset may have limited its performance.
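As an illustration of the kind of model the abstract describes, the sketch below shows a minimal feedforward classifier that predicts whether a worker will complete a microtask accurately from cognitive-test scores. The feature set, network architecture, and training setup are assumptions made for illustration only; the article's actual model and data are not reproduced here.

```python
# Illustrative sketch only: the study reports a deep learning model that predicts
# microtask accuracy from crowd workers' cognitive profiles. The features, network
# shape, and synthetic data below are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

# Hypothetical input: one row per worker with normalized cognitive-test scores
# (e.g. working memory, attention, processing speed) plus an assigned difficulty level.
X = torch.rand(134, 4)                       # 134 workers, 4 assumed features
y = torch.randint(0, 2, (134, 1)).float()    # 1 = worker met the accuracy threshold

model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),           # probability of an accurate outcome
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Predicted probability of accurate completion for a new (assumed) cognitive profile
with torch.no_grad():
    print(model(torch.rand(1, 4)).item())
```

A binary target (accurate vs. not accurate) is used here for simplicity; the same structure could predict a continuous accuracy score by replacing the sigmoid output and binary cross-entropy loss with a linear output and mean squared error.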
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10098703
DOI: http://dx.doi.org/10.3390/s23073571