Background: No method of standard setting for objective structured clinical examinations (OSCEs) is perfect. Using scores aggregated across stations risks allowing students who are incompetent in some core skills to pass an examination, which may not be acceptable for high-stakes assessments.

Aim: To assess the feasibility of using a factor analysis of station scores in a high-stakes OSCE to derive measures of underlying competencies.

Methods: A 12-station OSCE was administered to all 192 students in the penultimate undergraduate year at the University of Aberdeen Medical School. Analysis of the correlation matrix of station scores was used to exclude stations that performed unreliably. Factor analysis of the remaining station scores was carried out to characterise the underlying competencies being assessed. Factor scores were then used to derive pass/fail cut-off scores for the examination.
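As a concrete illustration of this two-step pipeline, the sketch below simulates station scores and applies a correlation-matrix screen followed by a factor analysis. It is an assumption-laden reconstruction, not the study's code: the paper does not name its software, and the simulated data, the 0.2 mean-correlation threshold, the exclusion rule, and the use of scikit-learn's FactorAnalysis with varimax rotation are all hypothetical choices made here for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated data: 192 students x 12 stations (placeholder for real OSCE scores),
# generated from three latent skills plus noise.
n_students, n_stations = 192, 12
latent = rng.normal(size=(n_students, 3))
loadings = rng.uniform(0.3, 0.9, size=(3, n_stations))
scores = pd.DataFrame(
    latent @ loadings + rng.normal(scale=0.8, size=(n_students, n_stations)),
    columns=[f"station_{i + 1}" for i in range(n_stations)],
)

# Step 1: inspect the correlation matrix and flag stations that correlate
# poorly with the rest (a hypothetical proxy for "unpredicted variation").
corr = scores.corr()
mean_r = (corr.sum() - 1) / (n_stations - 1)  # mean off-diagonal r per station
keep = mean_r[mean_r > 0.2].index             # threshold is illustrative only
retained = scores[keep]

# Step 2: factor-analyse the retained stations; fit_transform returns
# per-student factor scores, components_ holds the station loadings.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
factor_scores = fa.fit_transform(retained)
print(pd.DataFrame(fa.components_, columns=keep).round(2))
```

Inspecting the loadings table is what would allow factors to be labelled by the stations that load on them, in the manner the Results describe.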

Results: Four stations were identified as showing unpredicted variation in station scores. Analysis of the content of these stations allowed the underlying problems with their design to be isolated. Factor analysis of the remaining eight stations revealed three main underlying factors, accounting for 53% of the total variance in scores. These were labelled "examination skills", "communication skills" and "history taking skills".

Conclusion: Factor analysis is a useful tool for characterising and quantifying the skills that are assessed in an OSCE. Standard setting procedures can be used to calculate cut-off scores for each underlying factor.
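Per-factor cut-offs could be operationalised in several ways; the fragment below shows one hypothetical norm-referenced rule (mean minus one standard deviation on each factor) combined with a conjunctive pass decision. The paper does not state which standard-setting procedure was applied to the factor scores, so both the rule and the placeholder scores here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder factor scores: 192 students x 3 factors, standardised roughly
# as FactorAnalysis returns them (mean 0, unit variance per factor).
factor_scores = rng.normal(size=(192, 3))

# Hypothetical norm-referenced standard: cut-off = mean - 1 SD on each factor.
cutoffs = factor_scores.mean(axis=0) - factor_scores.std(axis=0)

# Conjunctive decision: a student must reach the cut-off on every factor, so
# weakness in one core competency cannot be masked by strength in another.
passed = (factor_scores >= cutoffs).all(axis=1)
print(f"cut-offs: {np.round(cutoffs, 2)}; pass rate: {passed.mean():.0%}")
```

The conjunctive step is what distinguishes this approach from aggregate scoring: it directly addresses the compensation problem raised in the Background.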

Source: http://dx.doi.org/10.1111/j.1365-2929.2004.01821.x
