Publications by authors named "Andre De Champlain"

Automatic Item Generation (AIG) refers to the process of using cognitive models, implemented as computer modules, to generate test items. It is a new but rapidly evolving research area in which cognitive and psychometric theory are combined into a digital framework. However, the quality, usability, and validity of AIG items relative to traditionally developed items remain poorly characterized.
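A minimal sketch of what a template-based generator of this kind can look like; the item model, variables, and clinical content below are invented for illustration and are not drawn from the work cited here:

from itertools import product

# A toy AIG item model: placeholders are filled from a (hypothetical)
# cognitive model's allowable variable values.
ITEM_MODEL = (
    "A {age}-year-old patient presents with {symptom}. "
    "Which one of the following is the most appropriate initial investigation?"
)

VARIABLES = {
    "age": ["25", "45", "70"],
    "symptom": ["acute chest pain", "progressive dyspnea"],
}

def generate_items(model, variables):
    """Instantiate the item model once per combination of variable values."""
    keys = list(variables)
    for values in product(*(variables[key] for key in keys)):
        yield model.format(**dict(zip(keys, values)))

for stem in generate_items(ITEM_MODEL, VARIABLES):
    print(stem)  # six generated item stems (3 ages x 2 symptoms)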


Despite the increased emphasis on workplace-based assessment in competency-based education models, there is still an important role for multiple-choice questions (MCQs) in the assessment of health professionals. The challenge, however, is to ensure that MCQs are developed in a way that allows educators to derive meaningful information about examinees' abilities. As educators' needs for high-quality test items have evolved, so has our approach to developing MCQs.


It is often assumed that improving medical education will improve patient care. While seemingly logical, this premise has rarely been investigated. In this Invited Commentary, the authors propose the use of big data to test this assumption.


Construct: Valid score interpretation is important for constructs measured in performance assessments such as objective structured clinical examinations (OSCEs). An OSCE is a type of performance assessment in which a series of standardized patients interact with the student or candidate, who is scored by either the standardized patient or a physician examiner.

Background: In high-stakes examinations, test security is an important issue.


With the recent interest in competency-based education, educators are being challenged to develop more assessment opportunities. As such, there is increased demand for exam content development, which can be a very labor-intensive process. An innovative solution to this challenge has been the use of automatic item generation (AIG) to develop multiple-choice questions (MCQs).


Purpose: The aim of this research was to compare different methods of calibrating the multiple-choice question (MCQ) and clinical decision-making (CDM) components of the Medical Council of Canada's Qualifying Examination Part I (MCCQEI) based on item response theory.

Methods: Our data consisted of test results from 8,213 first-time applicants to the MCCQEI in the spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple-choice items and several hundred CDM cases.
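For readers unfamiliar with IRT calibration, the sketch below fits a Rasch (one-parameter logistic) model, the simplest member of the model family such comparisons draw on. It uses simulated data and, for brevity, treats examinee abilities as known while estimating item difficulties; the actual MCCQEI calibration is more involved.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # the logistic function

# Simulated response matrix: 500 examinees x 20 items under a Rasch model.
rng = np.random.default_rng(0)
theta = rng.normal(0, 1, 500)   # examinee abilities (treated as known here)
b_true = rng.normal(0, 1, 20)   # true item difficulties
responses = (rng.random((500, 20)) < expit(theta[:, None] - b_true)).astype(float)

def neg_log_likelihood(b, x, theta):
    """Negative Rasch log-likelihood as a function of item difficulties b."""
    p = expit(theta[:, None] - b)
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

b_hat = minimize(neg_log_likelihood, np.zeros(20), args=(responses, theta)).x
print(np.round(b_hat - b_true, 2))  # per-item recovery error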


Construct: Automatic item generation (AIG) is an alternative method for producing large numbers of test items that integrates cognitive modeling with computer technology to systematically generate multiple-choice questions (MCQs). Initial applications of AIG demonstrated its effectiveness in producing test items. The purpose of our study is to describe and validate a method of generating plausible but incorrect distractors.
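A toy illustration of the underlying idea, in which each distractor is produced by applying a common cognitive error to the correct solution path; the dosage item and error rules are hypothetical, not the authors' models:

# Each distractor encodes one plausible error from a (hypothetical) cognitive
# model of how examinees miscompute a weight-based drug dose.
def correct_answer(weight_kg, dose_mg_per_kg):
    return weight_kg * dose_mg_per_kg

ERROR_RULES = {
    "ignored weight":   lambda w, d: d,              # dose not scaled by weight
    "doubled the dose": lambda w, d: 2 * w * d,      # common doubling slip
    "used pounds":      lambda w, d: (w * 2.2) * d,  # unit-conversion error
}

def generate_options(weight_kg, dose_mg_per_kg):
    key = correct_answer(weight_kg, dose_mg_per_kg)
    distractors = {name: rule(weight_kg, dose_mg_per_kg)
                   for name, rule in ERROR_RULES.items()}
    return key, distractors

print(generate_options(70, 5))  # key = 350 mg plus three plausible wrong answers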


We present a framework for technology-enhanced scoring of bilingual clinical decision-making (CDM) questions using an open-source scoring technology, and we evaluate the strength of the proposed framework using operational data from the Medical Council of Canada Qualifying Examination. Candidates' responses to six write-in CDM questions were used to develop a three-stage automated scoring framework. In Stage 1, linguistic features were extracted from the CDM responses.
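As a rough sketch of what such a pipeline can look like, the example below extracts TF-IDF features from free-text responses and trains a classifier against expert scores. The toy responses and the choice of model are assumptions for illustration, not the operational MCC system:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy write-in responses and expert scores; both are invented for illustration.
responses = [
    "order an ecg and cardiac enzymes",
    "reassure the patient and discharge home",
    "obtain ecg, troponin, and a chest x-ray",
    "prescribe oral antibiotics",
]
expert_scores = [1, 0, 1, 0]  # 1 = acceptable response, 0 = unacceptable

# Stage 1 analogue: extract word and bigram features; then learn a scorer.
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scorer.fit(responses, expert_scores)
print(scorer.predict(["order an ecg"]))  # score a new, unseen response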


Context: Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because they rely on human raters. One alternative to the manual scoring process is to integrate computer technology with writing assessment.


Examiner effects and content specificity are two well-known sources of construct-irrelevant variance that present great challenges in performance-based assessments. National medical organizations responsible for large-scale performance-based assessments face an additional challenge, as they must administer qualifying examinations to physician candidates at several locations and institutions. This study explores the impact of site location as a source of score variation in a large-scale national assessment used to measure the readiness of internationally educated physician candidates for residency programs.
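One common way to quantify such site effects is a variance-components model with a random intercept for site. The sketch below uses simulated data and is not the study's actual analysis:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated scores for 30 candidates at each of 10 sites; not the study's data.
rng = np.random.default_rng(1)
site = np.repeat(np.arange(10), 30)
site_shift = rng.normal(0, 2, 10)[site]          # site-level effect
score = 70 + site_shift + rng.normal(0, 8, 300)  # candidate-level noise
df = pd.DataFrame({"score": score, "site": site})

# Null model with a random intercept per site partitions the score variance.
fit = smf.mixedlm("score ~ 1", df, groups=df["site"]).fit()
print(fit.cov_re)  # estimated between-site variance
print(fit.scale)   # residual (within-site) variance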


Context: First-year residents begin clinical practice in settings in which attending staff and senior residents are available to supervise their work. There is an expectation that, while being supervised and as they become more experienced, residents will gradually take on more responsibilities and function independently.

Objectives: This study was conducted to define 'entrustable professional activities' (EPAs) and determine the extent of agreement between the level of supervision expected by clinical supervisors (CSs) and the level of supervision reported by first-year residents.


Background: Past research suggests that the use of externally applied scoring weights may not appreciably impact measurement qualities such as reliability or validity. Nonetheless, some credentialing boards and academic institutions apply differential scoring weights based on expert opinion about the relative importance of individual items or test components of Objective Structured Clinical Examinations (OSCEs).

Aims: To investigate the impact of simplified scoring models that make little to no use of differential weighting on the reliability of scores and decisions on a high-stakes OSCE required for medical licensure in Canada.
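A small simulation makes the comparison concrete: compute Cronbach's alpha for the same OSCE component scores under unit weights and under hypothetical expert weights. All data and weights below are invented, not the licensure exam's:

import numpy as np

def cronbach_alpha(components):
    """Alpha for an examinees x components matrix of scores."""
    k = components.shape[1]
    item_var = components.var(axis=0, ddof=1).sum()
    total_var = components.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulated OSCE data: 1,000 examinees, 10 components sharing one true score.
rng = np.random.default_rng(2)
true_score = rng.normal(0, 1, (1000, 1))
components = true_score + rng.normal(0, 1, (1000, 10))

expert_weights = rng.uniform(0.5, 2.0, 10)  # stand-in for expert-opinion weights
print(cronbach_alpha(components))                   # unit (simplified) weighting
print(cronbach_alpha(components * expert_weights))  # differential weighting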


Background: Tutorial-based assessment, commonly used in problem-based learning (PBL), is thought to provide information about students that differs from that gathered with traditional assessment strategies such as multiple-choice or short-answer questions. Although multiple observations within units in an undergraduate medical education curriculum foster more reliable scores, that evaluation design is not always practically feasible. This study therefore investigated the overall reliability of a tutorial-based program of assessment, the Tutotest-Lite.
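The intuition that pooling more tutorial observations yields more reliable scores is classically formalized by the Spearman-Brown prophecy formula, stated here for context; the study's own analyses may use a different reliability framework:

\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}

where \rho_1 is the reliability of a single observation and \rho_k is the projected reliability of a composite of k observations.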


Background: High-stakes medical licensing programs are planning to augment and adapt current examinations to be relevant for a two-decision-point model of licensure: entry into supervised practice and entry into unsupervised practice. Identifying which skills should be assessed at each decision point is therefore critical for informing examination development, and gathering input from residency program directors is an important part of that effort.

Methods: Drawing on previously developed surveys and expert panel input, we distributed a web-delivered survey to 3,443 residency program directors.


Introduction: It is not known whether a Standardized Patient's (SP's) performing arts background could affect his or her accuracy in recording candidate performance on a high-stakes clinical skills examination, such as the Comprehensive Osteopathic Medical Licensing Examination Level 2 Performance Evaluation. The purpose of this study is to investigate differences in the recording accuracy of history and physical examination checklist items between SPs who identify themselves as performing artists and SPs with no performing arts experience.

Methods: Forty SPs identified themselves as performing artists or non-performing artists.


Background: Though progress tests have been used for several decades in various medical education settings, few studies have offered analytic frameworks that practitioners could use to model growth of knowledge as a function of curricular and other variables of interest.

Aim: To explore the use of one form of progress testing in clinical education by modeling growth of knowledge in various disciplines as well as by assessing the impact of recent training (core rotation order) on performance using hierarchical linear modeling (HLM) and analysis of variance (ANOVA) frameworks.

Methods: This study included performances across four test administrations occurring between July 2006 and July 2007 for 130 students from a US medical school who graduated in 2008.
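For illustration, a minimal HLM growth model of this kind (repeated scores nested within students, with a random student intercept) can be fit as follows; the data are simulated and the specification is an assumption, not the study's actual model:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated progress-test data: 130 students, four administrations each.
rng = np.random.default_rng(3)
student = np.repeat(np.arange(130), 4)
occasion = np.tile(np.arange(4), 130)
ability = rng.normal(0, 5, 130)[student]
score = 60 + 3 * occasion + ability + rng.normal(0, 4, 520)
df = pd.DataFrame({"score": score, "occasion": occasion, "student": student})

# Random-intercept growth model: the occasion coefficient is average growth.
growth = smf.mixedlm("score ~ occasion", df, groups=df["student"]).fit()
print(growth.summary())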


Background: This study gathered evidence of external validity for the Foundations of Medicine (FOM) examination by assessing the relationship between its subscores and local grades for a sample of Portuguese medical students.

Method: Correlations were computed between six FOM subscores and nine Minho University grades for a sample of 90 medical students. A canonical correlation analysis was also run between the FOM and Minho measures.
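A minimal sketch of the canonical correlation step, using simulated stand-ins for the six FOM subscores and nine Minho grades:

import numpy as np
from sklearn.cross_decomposition import CCA

# Simulated data: 90 students whose two score sets share two ability factors.
rng = np.random.default_rng(4)
shared = rng.normal(size=(90, 2))
fom = shared @ rng.normal(size=(2, 6)) + rng.normal(scale=0.5, size=(90, 6))
minho = shared @ rng.normal(size=(2, 9)) + rng.normal(scale=0.5, size=(90, 9))

cca = CCA(n_components=2).fit(fom, minho)
u, v = cca.transform(fom, minho)
# Canonical correlations between paired variates of the two score sets.
print([round(float(np.corrcoef(u[:, i], v[:, i])[0, 1]), 3) for i in range(2)])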


Context: A test score is a number that purportedly reflects a candidate's proficiency in some clearly defined knowledge or skill domain. A test theory model is necessary to help us better understand the relationship between the observed (or actual) score on an examination and the underlying proficiency in the domain, which is generally unobserved. Common test theory models include classical test theory (CTT) and item response theory (IRT).
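Each model can be summarized in one line. In CTT the observed score decomposes into a true score and error, while in IRT (shown here in the common two-parameter logistic form; other variants add a guessing parameter) the probability of a correct response is a function of the latent proficiency:

X = T + E, \qquad P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}

where \theta_i is examinee i's proficiency and a_j and b_j are the discrimination and difficulty of item j.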


Background: The UK General Medical Council has emphasized the lack of evidence on whether graduates from different UK medical schools perform differently in their clinical careers. Here we assess the performance of UK graduates who have taken MRCP(UK) Part 1 and Part 2, which are multiple-choice assessments, and PACES, an assessment of clinical examination and communication skills that uses real and simulated patients, and we explore the reasons for the differences between medical schools.

Method: We performed a retrospective analysis of the performance of 5,827 doctors who graduated from UK medical schools and took Part 1, Part 2 or PACES for the first time between 2003/2 and 2005/3, and of 22,453 candidates who took Part 1 from 1989/1 to 2005/3.


Background: The ability to communicate effectively with patients is an essential element of a physician's clinical expertise.

Method: As part of the USMLE Step 2 Clinical Skills exam, standardized patients (SPs) provided ratings of communication and interpersonal skills (CIS) along three dimensions. Assessment data from a one-year (2006) cohort of graduates of international medical schools (IMGs) were analyzed, and the psychometric characteristics of the CIS measures are described.


Background: The purpose of the present study was to assess the fit of three factor analytic (FA) models with a representative set of United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) cases and examinees based on substantive considerations.

Method: Checklist, patient note, communication and interpersonal skills, as well as spoken English proficiency data were collected from 387 examinees on a set of four USMLE Step 2 CS cases. The fit of skills-based, case-based, and hybrid models was assessed.
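As a simplified illustration of fitting a factor model to such data: the study fit confirmatory skills-based, case-based, and hybrid models, whereas the sketch below fits only an exploratory two-factor model to simulated case scores:

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated case scores for 387 examinees driven by two latent skills.
rng = np.random.default_rng(5)
skills = rng.normal(size=(387, 2))             # e.g., clinical vs. communication
loadings = rng.uniform(0.5, 1.0, size=(2, 8))  # 8 observed case-level scores
scores = skills @ loadings + rng.normal(scale=0.5, size=(387, 8))

fa = FactorAnalysis(n_components=2).fit(scores)
print(np.round(fa.components_, 2))  # estimated loadings to inspect the structure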


Background: This study models time to passing the United States Medical Licensing Examination (USMLE) Step 3 for the computer-based testing (CBT) start-up cohort using the Cox proportional hazards model.

Method: The number of days it took to pass Step 3 was treated as the dependent variable in the model. Covariates were: (1) gender; (2) native language (English or other); (3) medical school location (United States or other); and (4) citizenship (United States or other).
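A minimal sketch of this setup using the lifelines package, with simulated data standing in for the USMLE records; the column names are invented, but the covariates mirror the four listed above:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated cohort of 300 examinees; not actual USMLE data.
rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame({
    "days_to_pass": rng.exponential(200, n),      # time to passing Step 3
    "passed": (rng.random(n) < 0.9).astype(int),  # 1 = event observed
    "female": rng.integers(0, 2, n),
    "english_native": rng.integers(0, 2, n),
    "us_school": rng.integers(0, 2, n),
    "us_citizen": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_pass", event_col="passed")
cph.print_summary()  # hazard ratios for each covariate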
