Purpose: Written examinations such as multiple-choice question (MCQ) exams are a key assessment strategy in health professions education (HPE), frequently used to provide feedback, to determine competency, or for licensure decisions. However, traditional psychometric approaches for monitoring the quality of written exams (that is, ensuring that items discriminate well and contribute to the overall reliability and validity of exam scores) usually require larger samples than are typically available in HPE contexts. The authors conducted a descriptive exploratory study to document how undergraduate medical education (UME) programs ensure the quality of their written exams, particularly MCQs.
Method: Using a qualitative descriptive methodology, the authors conducted semistructured interviews with 16 key informants from 10 Canadian UME programs in 2018. Interviews were transcribed, anonymized, coded by the primary investigator, and co-coded by a second team member. Data collection and analysis were conducted iteratively. Research team members engaged in analysis across phases, and consensus was reached on the interpretation of findings via group discussion.
Results: Participants focused their answers around MCQ-related practices, reporting using several indicators of quality such as alignment between items and course objectives and psychometric properties (difficulty and discrimination). The authors clustered findings around 5 main themes: processes for creating MCQ exams, processes for building quality MCQ exams, processes for monitoring the quality of MCQ exams, motivation to build quality MCQ exams, and suggestions for improving processes.
Conclusions: Participants reported engaging in multiple strategies to ensure the quality of MCQ exams. Assessment quality considerations were integrated throughout the development and validation phases, reflecting recent work that frames validity as a social imperative.
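The psychometric indicators mentioned in the Results (item difficulty and discrimination) and the reliability referenced in the Purpose correspond to standard classical-test-theory statistics. The sketch below shows how they are commonly computed, assuming a 0/1-scored response matrix with one row per examinee and one column per item; the function name, simulated data, and structure are illustrative assumptions, not taken from the study.

```python
import numpy as np

def item_statistics(responses: np.ndarray) -> dict:
    """Return per-item difficulty, per-item discrimination, and KR-20 reliability."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]

    # Difficulty (p-value): proportion of examinees answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination: corrected item-total correlation, i.e., the correlation
    # of each item score with the total score computed without that item.
    totals = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
        for i in range(n_items)
    ])

    # KR-20: internal-consistency reliability for dichotomously scored items.
    item_variance_sum = (difficulty * (1.0 - difficulty)).sum()
    total_variance = totals.var(ddof=1)
    kr20 = (n_items / (n_items - 1)) * (1.0 - item_variance_sum / total_variance)

    return {"difficulty": difficulty, "discrimination": discrimination, "kr20": kr20}

# Illustrative use with simulated responses (200 examinees, 5 items).
rng = np.random.default_rng(0)
simulated = (rng.random((200, 5)) > 0.4).astype(int)
print(item_statistics(simulated))
```

Using the corrected item-total correlation (excluding the item from the total) avoids inflating the discrimination estimate. With the small cohorts typical of HPE contexts these estimates are noisy, which is the sampling limitation the Purpose alludes to.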
DOI: http://dx.doi.org/10.1097/ACM.0000000000003659
J Pers Med
December 2024
Department of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark.
Artificial intelligence (AI) is becoming increasingly influential in ophthalmology, particularly through advancements in machine learning, deep learning, robotics, neural networks, and natural language processing (NLP). Among these, NLP-based chatbots are the most readily accessible and are driven by AI-based large language models (LLMs). These chatbots have facilitated new research avenues and have gained traction in both clinical and surgical applications in ophthalmology.
BMC Med Educ
December 2024
Medical Education Department, Faculty of Medicine, Dar Al Uloom University, Riyadh, Saudi Arabia.
Aim: Whether case-based modified essay questions (MEQs) are crucial to summative assessment in the medical curriculum is still debatable. The current study aimed to evaluate third-year medical students' performance in case-based MEQs and multiple-choice questions (MCQs) in summative assessment in the endocrine module.
Methods: Students' scores in mid- and final-module MEQs and MCQs were analyzed over four successive years, from 2018/2019 to 2021/2022; comparisons were made between students' scores in MEQs and MCQs and between the scores of different categories of students.
J Vet Med Educ
December 2024
Centre for E-Learning, Didactics and Educational Research, University of Veterinary Medicine Hannover, Bünteweg 2, 30559 Hannover, Germany.
Since 2008, electronic examinations have been conducted at the University of Veterinary Medicine Hannover (TiHo), Germany; these examinations are analyzed extensively in the current study. The aim is to assess the quality of the examinations, the status quo of the electronic examination system, and the implementation of recommendations regarding the conduct of exams at the TiHo. Based on the results, suitable indicators for the evaluation of examinations and items, as well as adequate quality assurance measures and item formats, are to be identified.
Acad Med
September 2024
P. Boedeker is assistant professor, Department of Education, Innovation, and Technology, Baylor College of Medicine, Houston, Texas; ORCID: https://orcid.org/0000-0002-0879-5886.
Problem: High-stakes multiple-choice question (MCQ) exams in medical education typically focus on assessment of learning at a single point without providing feedback for improvement. Educators can achieve a more balanced approach to MCQ exams by combining efficient assessment of learning with the feedback and improvement opportunities of assessment for learning.
Approach: As part of a curriculum renewal at Baylor College of Medicine's MD program, the Two-Phase Individual Assessment (TPIA) model was launched within a 4-week preclinical Foundations of Medicine course in August 2023.
Objective: The objective was to compare the average number of mistakes made on multiple-choice (MCQ) and fill-in-the-blank (FIB) questions in anatomy lab exams.
Methods: The study was conducted retrospectively; every exam included both MCQs and FIBs. The study cohorts were divided into 3 tiers based on the number and percentage of mistakes on the answer sheets: low (21-32 mistakes, >40%), middle (11-20 mistakes, 20%-40%), and high (1-9 mistakes, <20%) tiers.