Objective: To develop a more reliable method for coding medical interviews, focused on data-gathering and emotion-handling.
Methods: Two trained (30 h) undergraduate raters coded videotaped interviews from 127 resident-simulated patient (SP) interactions. After training on 45 videotapes, the raters coded 25 of the 127 study-set tapes for patient-centeredness. Guetzkow's U, Cohen's kappa, and percent agreement were used to measure the raters' reliability in unitizing and coding residents' skills for eliciting the agenda (3 yes/no items), the physical story (2 items), the personal story (6), and the emotional story (15), for using indirect skills (4), and for general patient-centeredness (3).
Results: The 45 items dichotomized from the earlier, Likert scale-based method were reduced to 33 during training. Guetzkow's U ranged from 0.00 to 0.087. Kappa ranged from 0.86 to 1.00 across the 6 variables and the 33 individual items; the overall kappa was 0.90, and overall percent agreement was 97.5%. Percent agreement by item ranged from 84% to 100%.
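For readers unfamiliar with the reliability statistics reported above, the following sketch shows how percent agreement and Cohen's kappa are computed for two raters coding dichotomous (yes/no) items. The ratings are invented example data for illustration only, not the study's data.

```python
def percent_agreement(r1, r2):
    """Fraction of items on which the two raters gave the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement for two raters on binary (0/1) codes."""
    n = len(r1)
    po = percent_agreement(r1, r2)   # observed agreement
    p1_yes = sum(r1) / n             # rater 1's base rate of "yes"
    p2_yes = sum(r2) / n             # rater 2's base rate of "yes"
    # Agreement expected by chance: both say "yes" or both say "no"
    pe = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (po - pe) / (1 - pe)

# Hypothetical example: 10 yes/no codes from each rater (1 = yes, 0 = no)
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
rater2 = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(f"percent agreement: {percent_agreement(rater1, rater2):.2f}")  # 0.90
print(f"kappa: {cohens_kappa(rater1, rater2):.2f}")                   # 0.80
```

Note that kappa discounts agreement expected by chance, which is why it is lower than raw percent agreement on the same data.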
Conclusions: A simple, highly reliable coding method, weighted (by number of items) to highlight the personal elements of an interview, was developed and is recommended as a criterion-standard research coding method.
Practice Implications: This easily conducted, reliable coding procedure can serve as the basis for everyday questionnaires, such as measures of patient satisfaction with patient-centeredness.
DOI: http://dx.doi.org/10.1016/j.pec.2016.10.003