Although texts recommend the generation of rich data from interviews, no empirical evidence base exists for how to achieve this. This study aimed to operationalise richness and to assess which components of the interview (for example, topic, interviewee, question) were predictive of it. A total of 400 interview questions and their corresponding responses were selected from 10 qualitative studies in the area of health, identified through university colleagues and the UK Data Archive database. The analysis used the Linguistic Inquiry and Word Count text analysis program together with additional rating scales. Richness was operationalised along five dimensions. 'Length of response' was predicted by a personal, less specific or positive topic, not being a layperson, later questions, and open or double questions; 'personal richness' was predicted by being a healthy participant and by questions about the past and future; 'analytical responses' were predicted by a personal or less specific topic, not being a layperson, later questions, and questions relating to insight and causation; 'action responses' were predicted by a less specific topic, not being a layperson, being healthy, and later, open questions. The model for 'descriptive richness' was not significant. Overall, open questions, located later in the interview and framed in the present or past tense, tended to be most predictive of richness. These findings could inform improvements in interview technique.
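The paper's analysis code is not published, but the kind of modelling it describes (predicting each richness dimension from coded features of the question and interviewee) can be illustrated with a short regression sketch. The sketch below is an assumption, not the authors' method: the variable names, coding scheme and data values are invented, and it uses ordinary least squares in Python via statsmodels to predict response word count, a rough proxy for 'length of response', from a few hypothetical question features.

# Illustrative sketch only; all data and variable names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coding of eight question/response pairs
data = pd.DataFrame({
    "response_word_count": [180, 45, 210, 30, 160, 95, 220, 40],
    "open_question":       [1, 0, 1, 0, 1, 0, 1, 0],      # open vs closed question
    "personal_topic":      [1, 0, 1, 0, 1, 1, 1, 0],      # personal vs non-personal topic
    "layperson":           [0, 1, 0, 1, 0, 1, 0, 1],      # interviewee is a layperson
    "question_position":   [12, 3, 18, 2, 15, 6, 20, 4],  # position of the question in the interview
})

# Ordinary least squares: word count as a simple proxy for 'length of response'
model = smf.ols(
    "response_word_count ~ open_question + personal_topic + layperson + question_position",
    data=data,
).fit()
print(model.summary())

In practice the study coded far more material (400 question and response pairs, with features such as topic specificity, tense and question type), but the general workflow is the same: code the interview components, then regress each richness measure on them.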

Source: http://dx.doi.org/10.1111/j.1467-9566.2010.01272.x

Similar Publications

Article Synopsis
  • The study evaluated how well ChatGPT-4 answered common questions about strabismus and amblyopia, focusing on both the quality of responses and their readability.
  • Of all responses assessed, 97% were deemed acceptable by pediatric ophthalmologists, with only 3% classified as incomplete, and no unacceptable responses found.
  • The readability scores indicated that understanding the responses required a college-level education, suggesting a need for improvement to make the information more accessible to a general audience.

The European Union Clinical Trials Regulation (EU CTR) provides new regulatory requirements for the preparation and submission of clinical trial documents. The United Kingdom Drug Information Association Medical Writing (UK DIA MW) Committee, with members from across the pharmaceutical industry, has reviewed the EU CTR and, in this report, provides expert guidance on writing documents for submission in the EU CTR Clinical Trials Information System (CTIS) portal. Medical writers should be aware that the Investigator's Brochure containing the Reference Safety Information (RSI) must align with the annual safety report, and that the RSI format must comply closely with the EU CTR.


Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study.
J Med Internet Res, October 2024.
Center for Biomedical and Health Research in Data Sciences, Miner School of Computer and Information Sciences, University of Massachusetts Lowell, Lowell, MA, United States.

Article Synopsis
  • Studies indicate that patients, especially those with low health literacy, struggle to understand medical terms in electronic health records (EHR), prompting the creation of the NoteAid dictionary to define these terms for better patient comprehension.
  • The study aimed to see whether medical experts and everyday people (laypeople) agree on what counts as medical jargon, comparing the terms each group flagged in EHR notes, with lay participants recruited through Amazon Mechanical Turk.
  • Results showed that medical experts identified 59% of terms as jargon, while laypeople identified only 25.6%, with good agreement among experts and fair agreement among laypeople regarding jargon classification; a brief sketch of how such agreement is typically quantified follows below.
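The synopsis does not state which agreement statistic was used, but "good" and "fair" agreement are conventionally reported as kappa values. The following sketch is purely illustrative: the two annotators and their jargon labels are invented, and it uses scikit-learn's cohen_kappa_score to show how pairwise agreement on a binary jargon/not-jargon judgement can be quantified.

# Illustrative only; the annotators and labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# 1 = term flagged as jargon, 0 = not flagged, for ten hypothetical EHR terms
annotator_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa between the two annotators: {kappa:.2f}")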
Article Synopsis
  • This study investigates whether audiovisual feedback devices can improve CPR performance among laypersons, focusing on non-medical caregivers who often respond first to emergencies before professionals arrive.
  • Conducted over a two-year period at a medical college in Kochi, the study involved 146 participants and used questionnaires and various statistical analyses to assess the impact of audiovisual aids on CPR quality.

In response to intense pressure, technology companies have enacted policies to combat misinformation. The enforcement of these policies has, however, led to technology companies being regularly accused of political bias. We argue that differential sharing of misinformation by people identifying with different political groups could lead to political asymmetries in enforcement, even by unbiased policies.

