Publications by authors named "Godfrey Pell"

There has been a long-running debate about the validity of item-based checklist scoring of performance assessments like OSCEs. In recent years, the conception of a checklist has developed from its dichotomous inception into a more 'key-features' and/or chunked approach, where 'items' have the potential to become weighted differently, but the literature does not always reflect these broader conceptions. We consider theoretical, design and (clinically trained) assessor issues related to differential item weighting in checklist scoring of OSCE stations.


The borderline regression method (BRM) is considered problematic in small-cohort OSCEs (e.g. fewer than 50 candidates), with institutions often relying instead on item-centred standard-setting approaches, which can be resource-intensive and lack defensibility in performance tests.
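The BRM referred to above can be sketched in a few lines. Everything below is hypothetical for illustration: the grade scale (0-3), the choice of grade 1 as the 'borderline' point, and the candidate data. Checklist scores are regressed on assessors' global grades, and the station pass mark is the predicted checklist score at the borderline grade.

```python
# Minimal sketch of the borderline regression method (BRM) for one OSCE
# station. Checklist scores are regressed on global grades by ordinary
# least squares; the pass mark is the fitted score at the borderline grade.

def brm_pass_mark(grades, scores, borderline_grade=1.0):
    """Fit scores = a + b * grades and return the predicted
    checklist score at the borderline grade point."""
    n = len(grades)
    mean_x = sum(grades) / n
    mean_y = sum(scores) / n
    sxx = sum((x - mean_x) ** 2 for x in grades)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(grades, scores))
    b = sxy / sxx                 # regression slope
    a = mean_y - b * mean_x       # intercept
    return a + b * borderline_grade

# Hypothetical cohort: global grade (0 = fail ... 3 = excellent), checklist %
grades = [0, 1, 1, 2, 2, 2, 3, 3]
scores = [35, 48, 52, 60, 64, 66, 75, 80]
print(round(brm_pass_mark(grades, scores), 1))  # → 49.5
```

With a small cohort the fitted line (and hence the pass mark) becomes sensitive to individual candidates, which is exactly the concern the passage raises.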


Introduction: In recent decades, there has been a move towards standardized models of assessment in which all students sit the same test (e.g. an OSCE).


Introduction: Many standard setting procedures focus on the performance of the "borderline" group, defined through expert judgments by assessors. In performance assessments such as Objective Structured Clinical Examinations (OSCEs), these judgments usually apply at the station level.

Methods And Results: Using largely descriptive approaches, we analyze the assessment profile of OSCE candidates at the end of a five-year undergraduate medical degree program to investigate the consistency of the borderline group across stations.


Context: There is a growing body of research investigating assessor judgments in complex performance environments such as OSCEs. Post hoc analysis can be employed to identify some elements of "unwanted" assessor variance. However, the impact of individual, apparently "extreme" assessors on OSCE quality, assessment outcomes and pass/fail decisions has not been previously explored.


Background: The use of the borderline regression method (BRM) is a widely accepted standard-setting method for OSCEs. However, it is unclear whether this method is appropriate for use with small cohorts (e.g. fewer than 50 candidates).


Background: When measuring assessment quality, increasing focus is placed on the value of station-level metrics in the detection and remediation of problems in the assessment.

Aims: This article investigates how disparity between checklist scores and global grades in an Objective Structured Clinical Examination (OSCE) can provide powerful new insights at the station level, and develops metrics to indicate when such disparity signals a problem.

Method: This retrospective study uses OSCE data from multiple examinations to investigate the extent to which these new measurements of disparity complement existing station-level metrics.
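The excerpt does not state the disparity metrics themselves, so the following is only a hypothetical illustration of the general idea: at a given station, count the candidates whose checklist score and global grade point in opposite directions (e.g. a checklist score above the pass mark paired with a failing global grade, or vice versa). The pass mark, grade scale and data below are all assumptions.

```python
# Hypothetical station-level disparity check: the fraction of candidates
# whose checklist pass/fail status disagrees with their global grade
# pass/fail status. High values would flag the station for review.

def disparity_rate(scores, grades, pass_mark, pass_grade):
    flags = sum(
        1 for s, g in zip(scores, grades)
        if (s >= pass_mark) != (g >= pass_grade)  # the two signals disagree
    )
    return flags / len(scores)

scores = [70, 45, 62, 55, 80, 40]     # checklist % per candidate
grades = [3, 1, 1, 2, 3, 0]           # global grade 0-3; >= 2 is a pass grade
print(disparity_rate(scores, grades, pass_mark=50, pass_grade=2))
```

In this toy data one candidate (checklist 62 but grade 1) is flagged, giving a disparity rate of 1/6.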


Context: Models of short-term remediation for failing students are typically associated with improvements in candidate performance at retest. However, the process is costly to deliver, particularly for performance retests with objective structured clinical examinations (OSCEs), and there is increasing evidence that these traditional models are associated with the longitudinal underperformance of candidates.

Methods: Rather than a traditional OSCE model, sequential testing involves a shorter 'screening' format, with an additional 'sequential' test for candidates who fail to meet the screening standard.
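The screening/sequential decision logic described above might be sketched as follows. The cut scores, and the rule that referred candidates are judged on their combined score, are assumptions for illustration rather than the authors' specification.

```python
# Sketch of sequential-testing decision logic: candidates who meet the
# standard on the short screening test pass outright; the remainder sit
# additional "sequential" stations and are judged on the combined test.

def sequential_decision(screen_score, screen_cut, seq_score=None, combined_cut=None):
    if screen_score >= screen_cut:
        return "pass (screen)"
    if seq_score is None:
        return "refer to sequential test"
    combined = screen_score + seq_score
    return "pass (combined)" if combined >= combined_cut else "fail"

print(sequential_decision(68, screen_cut=65))                      # passes the screen
print(sequential_decision(60, screen_cut=65))                      # referred onwards
print(sequential_decision(60, 65, seq_score=70, combined_cut=125)) # judged on combined score
```

Only candidates below the screening standard incur the cost of the longer test, which is the efficiency argument for this design.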


Objective Structured Clinical Examinations (OSCEs) are a key component within many healthcare assessment programmes. Quality assurance is designed to ensure rigour and credibility in decision making for both candidates and institutions, and is most commonly expressed by a single measure of reliability. How overall reliability interrelates with OSCE station-level analyses is less well established, especially with respect to the impact of quality improvements.
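The 'single measure of reliability' referred to above is, in practice, often Cronbach's alpha computed across stations (the excerpt does not name the measure, so this is an assumption). A minimal sketch with hypothetical candidate-by-station scores:

```python
# Cronbach's alpha across OSCE stations: a common single-figure
# reliability measure, computed from a candidate-by-station score matrix.

def cronbach_alpha(matrix):
    """matrix: one row per candidate, one column per station."""
    k = len(matrix[0])                      # number of stations
    def var(xs):                            # sample variance (n - 1 divisor)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    station_vars = [var([row[j] for row in matrix]) for j in range(k)]
    total_var = var([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - sum(station_vars) / total_var)

scores = [[10, 12], [14, 15], [18, 17], [20, 22]]  # 4 candidates, 2 stations
print(round(cronbach_alpha(scores), 3))
```

A single alpha like this says nothing about which station is causing problems, which is precisely why the passage argues for relating it to station-level analyses.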


With the increasing use of criterion-based assessment techniques in both undergraduate and postgraduate healthcare programmes, there is a consequent need to ensure the quality and rigour of these assessments. The obvious questions for those responsible for delivering assessment are how this 'quality' is measured, and what mechanisms might allow improvements in assessment quality to be demonstrated over time. Whilst a small base of literature exists, few papers give more than one or two metrics as measures of quality in Objective Structured Clinical Examinations (OSCEs). In this guide, aimed at assessment practitioners, the authors review the metrics available for measuring quality, indicate how a rounded picture of OSCE assessment quality may be constructed from a variety of such measures, and consider which characteristics of the OSCE are appropriately judged by which measure(s).


Background: In Objective Structured Clinical Examinations (OSCEs), the use of simulated patients (SPs) at many stations is a key aspect of the assessment. Often the SPs are asked to provide formal feedback (ratings) of their experience with the students under examination.

Aims: This study analyses whether, and how, SP data can best be used to enhance the robustness of the formal standard-setting process.


Background: A wide range of social software has become readily available to young people. There is increasing interest in the exciting possibilities of using social software for undergraduate medical education.

Aims: To identify the nature and extent of the use of social software by first year medical students.


Context: Medical schools in the UK set their own graduating examinations and pass marks. In a previous study we examined the equivalence of passing standards using the Angoff standard-setting method. To address the limitation this imposed on that work, we undertook further research using a standard-setting method specifically designed for objective structured clinical examinations (OSCEs).
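As background to the Angoff method mentioned above: a typical Angoff cut score is the sum, across items, of the judges' mean estimates of the probability that a borderline candidate succeeds on each item. The judge panel and estimates below are hypothetical.

```python
# Sketch of an Angoff-style cut score. Each judge estimates, per item,
# the probability that a borderline candidate answers correctly; the
# cut score is the sum of the per-item means (in raw-mark units here).

judges = [
    [0.6, 0.7, 0.5, 0.8],   # judge 1's per-item estimates (hypothetical)
    [0.5, 0.8, 0.6, 0.7],   # judge 2
    [0.7, 0.6, 0.5, 0.9],   # judge 3
]
n_items = len(judges[0])
item_means = [sum(j[i] for j in judges) / len(judges) for i in range(n_items)]
cut_score = sum(item_means)
print(round(cut_score, 2))  # cut score out of 4 marks
```

Because the judgments are made item by item rather than at the level of whole performances, the method scales poorly to OSCEs, which is one motivation for OSCE-specific methods such as borderline regression.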


Background: The transition from school to university involves maturational changes in both academic and personal life.

Method: The factors involved were evaluated through analysis of appraisal interview outcomes during the first two years, which documented achievements and goal-setting in 511 medical students (98% of two student-year cohorts). Qualitative analysis identified key issues in study skills, aspects of students' personal lives, and differences in approach to university life.


While Objective Structured Clinical Examinations (OSCEs) have become widely used to assess clinical competence at the end of undergraduate medical courses, the method of setting the passing score varies greatly, and there is no agreed best methodology. Although there is an assumption that the passing standard at graduation is the same at all medical schools, there is very little quantitative evidence in the field. In the United Kingdom, there is no national licensing examination; each medical school sets its own graduating assessment, and successful completion leads to a licence to practise granted by the General Medical Council.
