Methods to identify carelessness in survey research can be valuable tools for reducing bias during survey development, validation, and use. Because carelessness may take multiple forms, researchers typically use multiple indices when identifying it. In the current study, we extend the literature on careless response identification by examining the usefulness of three item-response-theory-based person-fit indices for identifying both random and over-consistent careless responding: infit, outfit, and the polytomous statistic. We compared these statistics with traditional careless response indices using both empirical and simulated data. The empirical data included 2,049 high school student surveys of teaching effectiveness from the Network for Educator Effectiveness. In the simulated data, we manipulated the type of carelessness (random responding or over-consistency) and the percentage of carelessness present (0%, 5%, 10%, 20%). Results suggest that infit, outfit, and the polytomous statistic may provide information complementary to traditional indices such as LongString, Mahalanobis Distance, Validity Items, and Completion Time. Receiver operating characteristic curves suggested that the person-fit indices showed good sensitivity and specificity for classifying both over-consistent and under-consistent careless patterns, thus functioning in a bidirectional manner. Carelessness classifications based on low fit values correlated with classifications from LongString and Completion Time, and classifications based on high fit values correlated with classifications from Mahalanobis Distance. We consider implications for research and practice.
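Two of the traditional indices named above are straightforward to compute: LongString (the longest run of identical consecutive answers, which flags over-consistent responding) and Mahalanobis Distance (the multivariate distance of a response vector from the sample centroid, which flags aberrant responding). A minimal sketch, using hypothetical 5-point Likert data rather than the study's actual survey:

```python
import numpy as np

def longstring(responses):
    """Maximum run length of identical consecutive answers per respondent."""
    runs = []
    for row in responses:
        best = cur = 1
        for a, b in zip(row, row[1:]):
            cur = cur + 1 if a == b else 1
            best = max(best, cur)
        runs.append(best)
    return np.array(runs)

def mahalanobis_sq(responses):
    """Squared Mahalanobis distance of each response vector from the sample mean."""
    X = np.asarray(responses, dtype=float)
    d = X - X.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))  # pinv guards against singular covariance
    return np.einsum('ij,jk,ik->i', d, inv_cov, d)

# Hypothetical data: four attentive respondents plus one straight-liner
data = [
    [4, 3, 4, 5, 3, 4, 2, 4],
    [2, 3, 2, 1, 3, 2, 4, 2],
    [5, 4, 5, 4, 4, 5, 3, 4],
    [1, 2, 1, 2, 2, 1, 3, 2],
    [3, 3, 3, 3, 3, 3, 3, 3],  # over-consistent (careless) pattern
]
print(longstring(data))  # the straight-liner's run spans all 8 items
```

High LongString values flag over-consistency, while high Mahalanobis distances flag unusual (often random) patterns, which mirrors the abstract's point that low and high person-fit values align with these two index families respectively.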
Full text: PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10552731 | DOI: http://dx.doi.org/10.1177/01466216231194358
Curr Issues Personal Psychol
March 2024
University of Texas, Arlington, United States.
Background: This study examined individual differences in how people behave in response to a pandemic, specifically the current coronavirus pandemic.
Participants And Procedure: A sample of 420 participants was recruited through the online data collection platform MTurk. Participants were directed via an online link to a Qualtrics survey.
Background: The use of ecological momentary assessment (EMA) designs has been on the rise in mental health epidemiology. However, there is a lack of knowledge of the determinants of participation in and compliance with EMA studies, reliability of measures, and underreporting of methodological details and data quality indicators.
Objective: This study aims to evaluate the quality of EMA data in a large sample of university students by estimating participation rate and mean compliance, identifying predictors of individual-level participation and compliance, evaluating between- and within-person reliability of measures of negative and positive affect, and identifying potential careless responding.
Br J Math Stat Psychol
December 2024
Centre for Educational Measurement, University of Oslo, Oslo, Norway.
Front Psychol
October 2024
Department of Self-Development Skills, King Saud University, Riyadh, Saudi Arabia.
Careless responding measures are important for several purposes, whether screening out careless respondents or studying careless responding as a substantive variable. One such approach for assessing carelessness in surveys is the instructional manipulation check. Despite its apparent popularity, little is known about the construct validity of instructional manipulation checks as measures of careless responding.