Currently, in-training evaluation in Kuwait depends on a global rating scale administered at the end of clinical clerkship rotations. Such a scale is inconsistent and subjective, and suffers from deficiencies such as positive skewness in the distribution of ratings and poor reliability. The aim of the study was to assess the inter-rater variation and reliability of the recently introduced Interaction Card (IC) method for evaluating clinical performance, and to measure the agreement between trainees' overall performance evaluation on the currently used global rating scale and the IC summative evaluation.

In the study, 370 evaluators encountered 50 trainees during their basic clinical training rotations (internal medicine, surgery, obstetrics and gynecology, and pediatrics) at six hospitals. A total of 9146 encounters were conducted, focusing on six clinical performance domains: clinical skills (history taking, case sheet, and physical examination), professional behaviour, case presentation, diagnosis, therapy and handling of emergencies.

The method demonstrated significant inter-rater variation in the overall IC ratings according to specialty, rank of evaluator and hospital (p < 0.001). The Interaction Card was found to be reliable, as shown by the internal consistency across the six domains (Cronbach's alpha = 0.914). There was low correlation (Spearman rank correlation coefficient, rs = 0.337) and low agreement (Kappa = 0.131) between the global rating scale and Interaction Card summative evaluations.

The IC method provided instantaneous formative feedback and summative evaluation of clinical performance to trainees. The method can be generalized to encompass training and examination programmes for all categories of trainees in most clinical specialties.
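The reliability and agreement figures quoted above (Cronbach's alpha, Cohen's kappa) follow standard formulas. The sketch below shows how such statistics are computed in plain Python; the data passed in would be the study's domain ratings, which are not available here, so any inputs are purely illustrative.

```python
# Hedged sketch: standard formulas for Cronbach's alpha (internal consistency
# across rating domains) and Cohen's kappa (chance-corrected agreement between
# two ratings). Illustrative only; not the study's actual data or code.

def cronbach_alpha(items):
    """items: list of k lists, one per domain, each holding that domain's
    scores across the same n encounters."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def cohen_kappa(a, b):
    """Chance-corrected agreement between two equal-length categorical
    rating lists: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    p_o = sum(1 for x, y in zip(a, b) if x == y) / n          # observed agreement
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

A kappa near 0.131, as reported, indicates agreement only slightly above chance, which is why the authors conclude the global rating scale and the IC capture different information.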
DOI: http://dx.doi.org/10.1080/01421590500046429