Purpose: To investigate potential threats to the validity of the spoken English proficiency ratings provided by standardised patients (SPs) in high-stakes clinical skills examinations.
Method: Spoken English ratings from 43 327 patient encounters were studied. These involved over 5000 candidates, 40% of whom were female and 33% of whom self-reported English as their native language. Over 100 SPs took part in the study, 51% of whom were female and 90% of whom were native English speakers. Possible differences in English ratings were studied as a function of candidate and SP gender, and as a function of candidate and SP native language (English versus all other languages).
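The abstract does not specify the statistical model used to test these candidate-by-SP effects. A common approach to this kind of question is a two-way ANOVA on encounter-level ratings with main effects and an interaction term; the sketch below illustrates that approach under this assumption, using hypothetical column names (english_rating, cand_gender, sp_gender) and illustrative data rather than the study's actual dataset.

```python
# Sketch of a candidate-by-SP interaction analysis of the kind described in the
# Method section. The abstract does not report the exact model; this assumes a
# two-way ANOVA on encounter-level English ratings. Column names and data are
# hypothetical, for illustration only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per patient encounter: the SP's English rating plus candidate/SP attributes.
encounters = pd.DataFrame({
    "english_rating": [7, 8, 6, 9, 7, 8],
    "cand_gender":    ["F", "M", "F", "M", "F", "M"],
    "sp_gender":      ["F", "F", "M", "M", "F", "M"],
})

# Linear model with main effects and the candidate x SP gender interaction.
model = smf.ols("english_rating ~ C(cand_gender) * C(sp_gender)",
                data=encounters).fit()

# Type II ANOVA table: a non-significant interaction term would parallel the
# "no candidate-by-SP gender effect" finding reported in the Results.
print(sm.stats.anova_lm(model, typ=2))
```

The same structure, with native language in place of gender, would cover the candidate and SP native-language comparison described above.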
Results: No significant candidate-by-SP gender interaction was detected, and there were no meaningful differences in mean English ratings as a function of SP or candidate gender. Likewise, no significant candidate-by-SP native-language interaction emerged. While candidates' mean English ratings were not associated with the native language of the SP, native English-speaking candidates did achieve significantly higher ratings.
Discussion: The lack of significant interaction between candidate and SP gender, and between candidate and SP native language, suggests that the SPs provided unbiased English ratings. These results, combined with the expected higher English ratings achieved by candidates from English-speaking backgrounds, provide additional evidence to support the validity and fairness of spoken English proficiency ratings provided by standardised patients.
DOI: http://dx.doi.org/10.1046/j.1365-2923.2003.01400.x