Examinations are essential in assessing student learning in medical education. Ensuring the quality of exam questions is a challenging yet necessary task to assure that assessments are equitable, reliable, and aptly gauge student learning. The aim of this study was to investigate whether incorporating student feedback can enhance the quality of exam questions in the Renal and Urinary System course, offered to second-year medical students. Using a single-arm, between-person, survey-based design, we conducted an a priori power analysis to establish the sample size. The exam comprised 100 multiple-choice questions written by a panel of 31 instructors. A total of 125 medical students took the exam in 2021. Following the exam, student feedback was collected, resulting in the revision of 12 questions by two subject experts. In the following year, the revised questions were administered to a new cohort of 125 second-year medical students. We used Fisher's z-transformation to test the significance of differences in point-biserial correlations between the 2021 and 2022 cohorts. The results reveal that 66% of the revised exam questions exhibited significantly higher point-biserial correlations, demonstrating the positive impact of involving students in the exam revision process. Their feedback enhances question clarity, relevance, alignment with learning objectives, and overall quality. In conclusion, student participation in exam evaluation and revision can improve the quality of exam questions. This approach capitalises on students' experiences and feedback and complements traditional approaches to ensuring the quality of exam questions, benefiting both the institution and its learners.
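The comparison described above can be sketched in a few lines: each correlation r is mapped to z = atanh(r), the difference z1 - z2 is divided by its standard error sqrt(1/(n1-3) + 1/(n2-3)), and the result is referred to the standard normal distribution. A minimal Python sketch follows; the numeric inputs are illustrative placeholders, not figures reported by the study.

```python
import math
from statistics import NormalDist

def fisher_z_test(r1, n1, r2, n2):
    """Two-tailed test for the difference between two independent
    correlations (e.g. point-biserials) via Fisher's z-transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # r -> z (variance-stabilising)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    z_stat = (z1 - z2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z_stat)))  # two-tailed p-value
    return z_stat, p

# Hypothetical example (values not taken from the study): a revised item
# whose point-biserial rose from 0.15 (2021, n = 125) to 0.40 (2022, n = 125).
z, p = fisher_z_test(0.40, 125, 0.15, 125)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With two independent cohorts of equal size, as here, the same standard error applies to every item, so only the transformed correlations change from question to question.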
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11100433 | PMC
http://dx.doi.org/10.1080/28338073.2024.2352964 | DOI Listing
Cureus
December 2024
Internal Medicine, Ross University School of Medicine, Saint Michael, BRB.
Purpose: The integration of artificial intelligence (AI) into medical education has witnessed significant progress, particularly in the domain of language models. This study focuses on assessing the performance of two notable language models, ChatGPT and BingAI Precise, in answering the National Eligibility Entrance Test for Postgraduates (NEET-PG)-style practice questions, simulating medical exam formats.
Methods: A cross-sectional study conducted in June 2023 involved assessing ChatGPT and BingAI Precise using three sets of NEET-PG practice exams, comprising 200 questions each.
Front Med (Lausanne)
January 2025
Department of Psychoanalysis and Psychotherapy, Medical University of Vienna, Vienna, Austria.
Background: The integration of interdisciplinary clinical reasoning and decision-making into the medical curriculum is imperative. Novel, high-quality e-learning environments, encompassing virtual clinical and hands-on training, are essential. Consequently, we evaluated the efficacy of a case-based e-learning approach.
Med Teach
January 2025
Institute of Medical Education, University Hospital Bonn, Bonn, Germany.
Purpose: Self-testing has been proven to significantly improve not only simple learning outcomes but also higher-order skills such as clinical reasoning in medical students. Previous studies have shown that self-testing is especially beneficial when presented with feedback, which raises the question of whether immediate, personalised feedback further enhances this effect. We therefore hypothesised that individual feedback has a greater effect on learning outcomes than generic feedback.
Int J Ophthalmol
January 2025
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Shanghai 200080, China.
Aim: To assess the feasibility of applying large language models (LLMs) to ocular surface diseases by testing the accuracy of five LLMs (ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM2, and SenseNova) in answering specialised questions on these diseases.
Methods: A group of experienced ophthalmology professors was asked to develop a 100-item single-choice examination on ocular surface diseases, designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions.
Am J Pharm Educ
January 2025
UNC Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599. Electronic address:
Objective: To compare the impact of examination feedback versus access to historical examination questions on information retention.
Methods: First-year student-pharmacists completed a baseline knowledge assessment composed of 30 examination questions divided into three conditions of 10 questions each. In the CHEAT condition, students were provided with 10 questions and their correct answers ahead of time.