Problem: As medical schools move from discipline-based courses to more integrated approaches, identifying assessment tools that parallel this change is an important goal.
Approach: The authors describe the use of test item statistics to assess the reliability and validity of web-enabled mechanistic case diagrams (MCDs) as a potential tool to assess students' ability to integrate basic science and clinical information. Students review a narrative clinical case and construct an MCD using items provided by the case author. Students identify the relationships among underlying risk factors, etiology, pathogenesis and pathophysiology, and the patient's signs and symptoms. They receive one point for each correctly identified link.
Outcomes: In 2014-2015 and 2015-2016, case diagrams were implemented in consecutive classes of 150 medical students. The alpha reliability coefficient for the overall score, constructed using each student's mean proportion correct across all cases, was 0.82. Discrimination indices for each of the case scores with the overall score ranged from 0.23 to 0.51. In a generalizability (G) study using those students with complete data (n = 251) on all 16 cases, 10% of the variance was true score variance, and systematic case variance was large. Using 16 cases yielded a G coefficient (relative score reliability) of 0.72 and a Phi coefficient (absolute score reliability) of 0.65.
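For readers who wish to reproduce this style of analysis, the following minimal Python sketch shows how per-case proportion-correct scores, Cronbach's alpha, and case-total discrimination indices can be computed from a student-by-case matrix of link scores. The data are simulated and illustrative only, not the study's data, and the corrected case-total correlation used here is an assumption; the authors' exact discrimination calculation may differ.

```python
import numpy as np

# Hypothetical data: links_correct[s, c] = number of correctly identified
# links by student s on case c; links_total[c] = number of scoreable links
# in case c. Values are simulated for illustration only.
rng = np.random.default_rng(0)
n_students, n_cases = 251, 16
links_total = rng.integers(10, 20, size=n_cases)
links_correct = rng.binomial(links_total, 0.7, size=(n_students, n_cases))

# Per-case score: proportion of links correctly identified.
case_scores = links_correct / links_total          # shape (students, cases)

# Overall score: each student's mean proportion correct across all cases.
overall = case_scores.mean(axis=1)

# Cronbach's alpha across the 16 case scores.
k = n_cases
item_vars = case_scores.var(axis=0, ddof=1)
total_var = case_scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Discrimination index for each case: correlation of the case score with
# the mean of the remaining cases (corrected case-total correlation;
# an assumption about how the index was computed).
discrimination = np.array([
    np.corrcoef(case_scores[:, c],
                np.delete(case_scores, c, axis=1).mean(axis=1))[0, 1]
    for c in range(n_cases)
])

print(f"alpha = {alpha:.2f}")
print(f"discrimination range: {discrimination.min():.2f} "
      f"to {discrimination.max():.2f}")
```

The G study results (variance components, G coefficient, and Phi) would require a generalizability analysis of the students x cases design, which is not shown in this sketch.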
Next Steps: The next phase of the project will involve deploying MCDs in higher-stakes settings to determine whether similar results can be achieved. Further analyses will determine whether these assessments correlate with other measures of higher-order thinking skills.
DOI: http://dx.doi.org/10.1097/ACM.0000000000002184