Publications by authors named "Karen Coetzee"

Introduction: The COVID-19 pandemic necessitated rapid adaptation of clinical competence assessments, including the transition of Objective Structured Clinical Examinations (OSCEs) from in-person to virtual formats. This study investigates the construct equivalence of a high-stakes OSCE, originally designed for in-person delivery, when adapted for a virtual format.

Methods: A retrospective analysis was conducted using OSCE scores from the Internationally Educated Nurse Competency Assessment Program (IENCAP®).

Rationale: Objective Structured Clinical Examinations (OSCEs) are widely used for assessing clinical competence, especially in high-stakes environments such as medical licensure. However, reusing OSCE cases across multiple administrations raises concerns about the stability of case parameters over time, a phenomenon known as item parameter drift (IPD).

Aims and Objectives: This study aims to investigate IPD in reused OSCE cases while accounting for examiner scoring effects, using a Many-facet Rasch Measurement (MFRM) model.
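
For orientation, the MFRM model referenced above decomposes the log-odds of adjacent rating categories into examinee ability, case difficulty, examiner severity, and a category threshold. The formulation below is the standard one from the MFRM literature, offered as an assumed sketch rather than the study's exact specification:

\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \lambda_j - \tau_k

where \theta_n is the ability of examinee n, \delta_i the difficulty of case i, \lambda_j the severity of examiner j, and \tau_k the threshold of rating category k. Under this framing, IPD would appear as instability in the \delta_i estimates across administrations once the \lambda_j terms absorb examiner scoring effects.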

High-stakes assessments must discriminate between examinees who are sufficiently competent to practice in the health professions and examinees who are not. In these settings, criterion-referenced standard-setting methods are strongly preferred over norm-referenced methods. While there are many criterion-referenced options, few are feasible or cost-effective for objective structured clinical examinations (OSCEs).

Examiner-based variance can affect test-taker outcomes. The aim of this study was to investigate the examiner-based effect of differential rater functioning over time (DRIFT). Average station-level scores from five administrations of the same version of a high-stakes 12-station OSCE were analyzed for the presence of DRIFT.
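
In the measurement literature, DRIFT is often operationalized as an examiner-by-administration interaction, for example within a many-facet Rasch model. The term below is an illustrative, assumed formulation and not necessarily the analysis used in this study:

\log\left(\frac{P_{nijtk}}{P_{nijt(k-1)}}\right) = \theta_n - \delta_i - \lambda_j - \lambda_{jt} - \tau_k

where \lambda_j is the overall severity of examiner j and \lambda_{jt} is the shift in that examiner's severity at administration t; a systematic trend in \lambda_{jt} across the five administrations would indicate DRIFT.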

Introduction: Tablet-based assessments offer benefits over scannable-paper assessments; however, little is known about their impact on the variability of assessment scores.

Methods: Two studies were conducted to evaluate changes in rating technology. Rating modality (paper vs. tablet) was manipulated between candidates (Study 1) and within candidates (Study 2).
