Introduction: In recent years, podcasts have been increasingly deployed in medical education. However, studies often fail to effectively evaluate the learning outcomes of these podcasts. The aim of this study was to determine whether actively producing podcasts enhances students' knowledge compared with passively consuming student-produced podcasts, since production increases engagement with the learning content through active learning.
Background: Serious games are risk-free environments for training various medical competencies, such as clinical reasoning, without endangering patient safety. Furthermore, serious games provide a context for training in situations with unpredictable outcomes. Training these competencies is particularly important for healthcare professionals in emergency medicine.
Background: Artificial intelligence (AI) is becoming increasingly important in healthcare. It is therefore crucial that today's medical students have certain basic AI skills that enable them to use AI applications successfully. These basic skills are often referred to as "AI literacy".
Serious games, as a learning resource, enhance their game character by embedding game design elements typically used in entertainment games. Serious games as a whole have already proven their teaching effectiveness in various educational contexts, including medical education. The embedded game design elements play an essential role in a game's effectiveness, and they should therefore be selected on the basis of evidence-based theories.
Problem: Creating medical exam questions is time-consuming, but well-written questions can be used for test-enhanced learning, which has been shown to have a positive effect on student learning. The automated generation of high-quality questions using large language models (LLMs), such as ChatGPT, would therefore be desirable. However, no current studies compare students' performance on LLM-generated questions with their performance on questions developed by humans.