A Longitudinal Study of Student Feedback Integration in Medical Examination Development.

J CME

Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Oakland University, Rochester, MI, USA.

Published: May 2024

Examinations are essential in assessing student learning in medical education. Ensuring the quality of exam questions is a highly challenging yet necessary task to assure that assessments are equitable, reliable, and aptly gauge student learning. The aim of this study was to investigate whether the incorporation of student feedback can enhance the quality of exam questions in the Renal and Urinary System course, offered to second-year medical students. Using a single-arm between-person survey-based design, we conducted an a priori power analysis to establish the sample size. The exam comprised 100 multiple-choice questions written by a panel of 31 instructors. A total of 125 medical students took the exam in 2021. Following the exam, student feedback was collected, resulting in the revision of 12 questions by two subject experts. In the following year, the revised questions were administered to a new cohort of 125 second-year medical students. We used Fisher's z-transformation to test the significance of differences in point-biserial correlations between the 2021 and 2022 cohorts. The results reveal that 66% of the revised exam questions exhibited significantly higher point-biserial correlations. This demonstrates the positive impact of involving students in the exam revision process. Their feedback enhances question clarity, relevance, alignment with learning objectives, and overall quality. In conclusion, student participation in exam evaluation and revision can improve the quality of exam questions. This approach capitalises on students' experiences and feedback and complements the traditional approaches to ensure the quality of exam questions, benefiting both the institution and its learners.
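The comparison of point-biserial correlations across the 2021 and 2022 cohorts uses Fisher's z-transformation, a standard test for the difference between two independent correlation coefficients. A minimal Python sketch of that test follows; the correlation values in the example are hypothetical illustrations, not figures reported by the study:

```python
from math import atanh, sqrt
from statistics import NormalDist


def fisher_z_test(r1: float, n1: int, r2: float, n2: int) -> tuple[float, float]:
    """Two-tailed test for the difference between two independent
    correlations (e.g. an item's point-biserial in two cohorts)."""
    z1, z2 = atanh(r1), atanh(r2)                # Fisher's variance-stabilising transform
    se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))   # standard error of z1 - z2
    z = (z1 - z2) / se                            # test statistic ~ N(0, 1) under H0
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))   # two-tailed p-value
    return z, p


# Hypothetical item: point-biserial rises from 0.15 (2021) to 0.45 (2022),
# each cohort with n = 125 examinees, as in the study design.
z_stat, p_value = fisher_z_test(0.15, 125, 0.45, 125)
```

With these illustrative values the difference is significant at the 0.05 level, which is the kind of item-level comparison the abstract reports for the 12 revised questions.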


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11100433
http://dx.doi.org/10.1080/28338073.2024.2352964

Similar Publications

Purpose: The integration of artificial intelligence (AI) into medical education has witnessed significant progress, particularly in the domain of language models. This study focuses on assessing the performance of two notable language models, ChatGPT and BingAI Precise, in answering the National Eligibility Entrance Test for Postgraduates (NEET-PG)-style practice questions, simulating medical exam formats.

Methods: A cross-sectional study conducted in June 2023 involved assessing ChatGPT and BingAI Precise using three sets of NEET-PG practice exams, comprising 200 questions each.


Improving students' performance via case-based e-learning.

Front Med (Lausanne)

January 2025

Department of Psychoanalysis and Psychotherapy, Medical University of Vienna, Vienna, Austria.

Background: The integration of interdisciplinary clinical reasoning and decision-making into the medical curriculum is imperative. Novel, high-quality e-learning environments, encompassing virtual clinical and hands-on training, are essential. Consequently, we evaluated the efficacy of a case-based e-learning approach.


Purpose: Self-testing has been proven to significantly improve not only simple learning outcomes but also higher-order skills such as clinical reasoning in medical students. Previous studies have shown that self-testing is especially beneficial when presented with feedback, which raises the question of whether immediate, personalised feedback further strengthens this effect. We therefore hypothesised that individual feedback has a greater effect on learning outcomes than generic feedback.


Assessing the possibility of using large language models in ocular surface diseases.

Int J Ophthalmol

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Shanghai 200080, China.

Aim: To assess the feasibility of using large language models (LLMs) in ocular surface diseases by testing the accuracy of five different LLMs in answering specialised questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM2, and SenseNova.

Methods: A group of experienced ophthalmology professors was asked to develop a 100-item single-choice examination on ocular surface diseases designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions.


Objective: To compare the impact of examination feedback versus access to historical examination questions on information retention.

Methods: First-year student-pharmacists completed a baseline knowledge assessment composed of 30 examination questions divided into three conditions of 10 questions each. In the CHEAT condition, students were provided with 10 questions and their correct answers ahead of time.

