This paper draws on individual-level data from the National Survey of Family Growth (NSFG) to identify likely underreporters of abortion and miscarriage and to examine their characteristics. The NSFG asks about abortion and miscarriage twice, once in the computer-assisted personal interviewing (CAPI) portion of the questionnaire and again in the audio computer-assisted self-interviewing (ACASI) portion. We used two methods to identify likely underreporters: direct comparison of the answers given in CAPI and ACASI, and latent class models.
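As a rough illustration of the first method, the direct comparison amounts to flagging respondents whose self-administered (ACASI) count exceeds their interviewer-administered (CAPI) count. The sketch below uses invented data and hypothetical column names, not the NSFG's actual variables:

```python
import pandas as pd

# Hypothetical respondent-level file; values and column names are illustrative only.
df = pd.DataFrame({
    "resp_id":         [1, 2, 3, 4],
    "abortions_capi":  [0, 1, 2, 0],   # count reported in the interviewer-administered (CAPI) section
    "abortions_acasi": [1, 1, 3, 0],   # count reported in the self-administered (ACASI) section
})

# Direct comparison: a larger ACASI count than CAPI count suggests underreporting in CAPI.
df["likely_underreporter"] = df["abortions_acasi"] > df["abortions_capi"]
print(df)
```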
The usual method for assessing the reliability of survey data has been to conduct reinterviews a short interval (such as one to two weeks) after an initial interview and to use these data to estimate relatively simple statistics, such as gross difference rates (GDRs). More sophisticated approaches have also been used to estimate reliability. These include estimates from multi-trait, multi-method experiments, models applied to longitudinal data, and latent class analyses.
J Surv Stat Methodol
November 2020
Using reinterview data from the PATH Reliability and Validity (PATH-RV) study, we examine the characteristics of questions and respondents that predict the reliability of the answers. In the PATH-RV study, 524 respondents completed an interview twice, five to twenty-four days apart. We coded a number of question characteristics and used them to predict the gross discrepancy rates (GDRs) and kappas for each question.
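For readers unfamiliar with these two agreement measures, the sketch below shows one standard way to compute a gross discrepancy rate (the proportion of respondents whose interview and reinterview answers differ) and Cohen's kappa (agreement corrected for chance) from paired responses. The data are invented, and the code is not from the PATH-RV study itself:

```python
from collections import Counter

# Paired answers to one question from the interview (t1) and the reinterview (t2); invented data.
t1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
t2 = ["yes", "no",  "no", "no", "yes", "yes", "yes", "no"]

n = len(t1)

# Gross discrepancy rate: proportion of cases whose two answers disagree.
gdr = sum(a != b for a, b in zip(t1, t2)) / n

# Cohen's kappa: observed agreement corrected for the agreement expected by chance.
p_obs = 1 - gdr
m1, m2 = Counter(t1), Counter(t2)
p_exp = sum((m1[c] / n) * (m2[c] / n) for c in set(t1) | set(t2))
kappa = (p_obs - p_exp) / (1 - p_exp)

print(f"GDR = {gdr:.2f}, kappa = {kappa:.2f}")
```

With these invented answers the computation yields a GDR of 0.25 and a kappa of 0.50.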
Although most survey researchers agree that reliability is a critical requirement for survey data, there have not been many efforts to assess the reliability of responses in national surveys. In addition, there are quite different approaches to studying the reliability of survey responses. In the first section of the Lecture, I contrast a psychological theory of over-time consistency with three statistical models that use reinterview data, multi-trait multi-method experiments, and three-wave panel data to estimate reliability.
J Surv Stat Methodol
February 2021
[This corrects the article DOI: 10.1093/jssam/smz034.]
Am J Public Health
October 2019
Introduction: This paper reports a study conducted to estimate the reliability and validity of answers to the Youth and Adult questionnaires of the Population Assessment of Tobacco and Health (PATH) Study.
Methods: A total of 407 adult and 117 youth respondents completed the wave 4 (2016-2017) PATH Study interview twice, 6-24 days apart. The reinterview data were used to estimate the reliability of answers to the questionnaire.
It is well known that some survey respondents reduce the effort they invest in answering questions by taking mental shortcuts, a behavior known as survey satisficing. This is a concern because such shortcuts can reduce the quality of responses and, potentially, the accuracy of survey estimates. This article explores "speeding," an extreme form of satisficing, which we define as answering so quickly that respondents could not have given much, if any, thought to their answers.
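A simple way to operationalize such a definition (a sketch only, not the article's actual criterion) is to flag answers whose response time falls below a per-item threshold, for example a minimum number of milliseconds per word of question text:

```python
# Illustrative speeding flag: the threshold and data are hypothetical, not from the article.
MS_PER_WORD = 300  # assumed minimum plausible per-word reading/answering time

responses = [
    {"item": "q1", "question_words": 12, "response_ms": 2100},
    {"item": "q2", "question_words": 20, "response_ms": 9500},
    {"item": "q3", "question_words": 8,  "response_ms": 1200},
]

for r in responses:
    threshold = r["question_words"] * MS_PER_WORD
    r["speeding"] = r["response_ms"] < threshold   # answered faster than the assumed minimum

print([(r["item"], r["speeding"]) for r in responses])
```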
Grid or matrix questions are associated with a number of problems in Web surveys. In this paper, we present results from two experiments testing designs of grid questions intended to reduce breakoffs, missing data, and satisficing. The first examines dynamic elements that help guide respondents through the grid, as well as splitting a larger grid into component pieces.
This paper presents results from six experiments that examine the effect of the position of an item on the screen on the evaluative ratings it receives. The experiments are based on the idea that respondents expect "good" things (those they view positively) to appear higher on the screen than "bad" things. The experiments use items on different topics (Congress and HMOs, a variety of foods, and six physician specialties) and different methods for varying their vertical position on the screen.
Latent class analysis (LCA) has been hailed as a promising technique for studying measurement errors in surveys, because the models produce estimates of the error rates associated with a given question. Still, the issue arises as to how accurate these error estimates are and under what circumstances they can be relied on. Skeptics argue that latent class models can understate the true error rates and at least one paper (Kreuter et al.
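To make concrete what "error rates from a latent class model" means, the sketch below fits a minimal two-class latent class model to three dichotomous indicators with the EM algorithm. The data, starting values, and number of indicators are invented; real applications, including the debate this abstract refers to, involve far more careful identification and model checking:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three error-prone yes/no indicators of a single latent dichotomy.
# All parameter values here are invented for illustration.
n, true_prevalence = 2000, 0.30
truth = rng.random(n) < true_prevalence
p_yes = np.where(truth[:, None], [0.85, 0.80, 0.90], [0.05, 0.10, 0.02])
y = (rng.random((n, 3)) < p_yes).astype(float)

# Two-class latent class model fitted with the EM algorithm.
pi = 0.5                                 # P(latent class 1)
theta = np.array([[0.70, 0.70, 0.70],    # P(indicator j = "yes" | class 1)
                  [0.20, 0.20, 0.20]])   # P(indicator j = "yes" | class 0)
for _ in range(500):
    # E-step: posterior probability of class 1 for each respondent.
    like1 = np.prod(theta[0] ** y * (1 - theta[0]) ** (1 - y), axis=1)
    like0 = np.prod(theta[1] ** y * (1 - theta[1]) ** (1 - y), axis=1)
    post1 = pi * like1 / (pi * like1 + (1 - pi) * like0)
    # M-step: update the class prevalence and conditional response probabilities.
    pi = post1.mean()
    theta[0] = (post1[:, None] * y).sum(axis=0) / post1.sum()
    theta[1] = ((1 - post1)[:, None] * y).sum(axis=0) / (1 - post1).sum()

# Model-implied error rates for each indicator.
print("estimated prevalence:", round(float(pi), 3))
print("false-negative rates:", np.round(1 - theta[0], 3))   # P("no" | class 1)
print("false-positive rates:", np.round(theta[1], 3))       # P("yes" | class 0)
```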
Web surveys often collect information such as frequencies, currency amounts, dates, or other items requiring short structured answers in an open-ended format, typically using text boxes for input. We report on several experiments exploring design features of such input fields. We find little effect of the size of the input field on whether frequency or dollar amount answers are well-formed or not.
A nearly ubiquitous feature of user interfaces is feedback on task completion, such as the progress indicator: a graphical bar that grows as more of the task is completed. The presumed benefit is that users will be more likely to complete a task if they see they are making progress, but feedback indicating slow progress may sometimes discourage users from completing it. This paper describes two experiments that evaluate the impact of progress indicators on the completion of on-line questionnaires.
Survey respondents may misinterpret the questions they are asked, potentially undermining the accuracy of their answers. One way to reduce this risk is to make definitions of key question concepts available to respondents. In the current study, we compared two methods of making definitions available to web survey respondents: displaying the definition with the question text and displaying the definition when respondents roll the mouse over the relevant question terms.
Survey researchers since Cannell have worried that respondents may take various shortcuts to reduce the effort needed to complete a survey. The evidence for such shortcuts is often indirect. For instance, preferences for earlier versus later response options have been interpreted as evidence that respondents do not read beyond the first few options.
Psychologists have worried about the distortions introduced into standardized personality measures by social desirability bias. Survey researchers have had similar concerns about the accuracy of survey reports about such topics as illicit drug use, abortion, and sexual behavior. The article reviews the research done by survey methodologists on reporting errors in surveys on sensitive topics, noting parallels with and differences from the psychological literature on social desirability.
Surveys of sensitive topics, such as the Injury Control and Risk Surveys (ICARIS) or the Behavioral Risk Factors Surveillance System (BRFSS), are often conducted by telephone using random-digit-dial (RDD) sampling methods. Although this method of data collection is relatively quick and inexpensive, it suffers from growing coverage problems and falling response rates. In this paper, several alternative methods of data collection are reviewed, including audio computer-assisted interviews as part of personal visit surveys, mail surveys, web surveys, and interactive voice response surveys.
Surveys reflect societal change in a way that few other research tools do. Over the past two decades, three developments have transformed surveys. First, survey organizations have adopted new methods for selecting telephone samples; these new methods were made possible by the creation of large databases that include all listed telephone numbers in the United States.