Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.

Acad Med

Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335. A. Hyderi is associate dean for curriculum and associate professor, Department of Family Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois. N. Heine is assistant professor, Department of Medical Education and Department of Medicine, and director, Clinical Skills Education Center, Loma Linda University School of Medicine, Loma Linda, California; ORCID: http://orcid.org/0000-0001-6812-9079. W. May is professor, Department of Medical Education, and director, Clinical Skills Education and Evaluation Center, Keck School of Medicine of the University of Southern California, Los Angeles, California. A. Nevins is clinical associate professor, Department of Medicine, Stanford University School of Medicine, Palo Alto, California. M. Lee is professor of medical education, University of California, Los Angeles David Geffen School of Medicine, Los Angeles, California. G. Bordage is professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois. R. Yudkowsky is director, Graham Clinical Performance Center, and professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0002-2145-7582.

Published: November 2017

Purpose: To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases and to provide rater training protocols and guidelines for scoring patient notes (PNs).

Method: Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores.
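
The G-study component lends itself to a brief worked illustration. Below is a minimal sketch, not drawn from the study's code or data, of how relative and absolute generalizability coefficients can be computed from variance component estimates in a persons-crossed-with-cases (p x c) design; the function name and all numeric values are illustrative assumptions.

```python
# Minimal illustration (assumed values, not the study's data): generalizability
# coefficients for a persons-crossed-with-cases (p x c) design.

def g_coefficients(var_p, var_c, var_pc_e, n_cases):
    """Return (E(rho^2), Phi) for a p x c design.

    var_p    -- universe-score (person) variance component
    var_c    -- case variance component
    var_pc_e -- person-by-case interaction confounded with residual error
    n_cases  -- number of cases in the decision (D-) study
    """
    rel_error = var_pc_e / n_cases            # relative error variance
    abs_error = (var_c + var_pc_e) / n_cases  # absolute error variance
    e_rho2 = var_p / (var_p + rel_error)      # generalizability coefficient
    phi = var_p / (var_p + abs_error)         # dependability coefficient
    return e_rho2, phi

if __name__ == "__main__":
    # Hypothetical variance components for an eight-case examination.
    e_rho2, phi = g_coefficients(var_p=0.8, var_c=0.5, var_pc_e=2.0, n_cases=8)
    print(f"E(rho^2) = {e_rho2:.2f}, Phi = {phi:.2f}")
```

In a full analysis the variance components would themselves be estimated from the score data (e.g., with a mixed-effects model), and the design would include additional facets such as school and task.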

Results: Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred using case-specific scoring guidelines with clear point-scoring systems.
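
The composite-weighting result can be pictured with a short, hypothetical sketch. The code below uses Mosier's formula for the reliability of a weighted composite of two components (an SP encounter score and a PN score); the component reliabilities, standard deviations, and intercorrelation are assumptions chosen for illustration, not values reported in the study.

```python
# Hypothetical sketch (assumed inputs, not the study's data): how composite
# reliability changes as the patient-note (PN) weight varies, using Mosier's
# formula for the reliability of a weighted composite.

import numpy as np

def mosier_composite_reliability(weights, sds, reliabilities, corr):
    """Mosier (1943) reliability of a weighted composite of component scores."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(sds, dtype=float)
    r = np.asarray(reliabilities, dtype=float)
    cov = np.asarray(corr, dtype=float) * np.outer(s, s)  # observed covariance matrix
    composite_var = w @ cov @ w                            # variance of the composite
    error_var = np.sum((w * s) ** 2 * (1.0 - r))           # weighted error variance
    return 1.0 - error_var / composite_var

if __name__ == "__main__":
    corr = np.array([[1.0, 0.4],          # assumed SP-PN score correlation
                     [0.4, 1.0]])
    sds = [1.0, 1.0]                      # standardized component scores
    reliabilities = [0.75, 0.65]          # assumed SP and PN reliabilities
    for pn_weight in np.arange(0.0, 1.01, 0.1):
        w = [1.0 - pn_weight, pn_weight]  # (SP encounter weight, PN weight)
        rel = mosier_composite_reliability(w, sds, reliabilities, corr)
        print(f"PN weight = {pn_weight:.1f}  composite reliability = {rel:.3f}")
```

With these illustrative inputs the composite reliability peaks at an intermediate PN weight, the same qualitative pattern the study reports (maximum between 30% and 40%); the exact location of the peak depends on the actual variance components and correlations.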

Conclusions: This multisite study presents validity evidence for PN scores based on scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.

Source
http://dx.doi.org/10.1097/ACM.0000000000001918
