The provision of courses in complementary and integrative medicine (CIM) varies widely between medical schools. To improve CIM education effectively, it is essential to use robust evaluation instruments that measure the impact of different educational interventions. This review aimed to identify and critically appraise the qualitative and quantitative instruments used to evaluate CIM courses in undergraduate medical education.

A systematic review was conducted in PubMed/MEDLINE, LIVIVO, CINAHL/EBSCO, Scopus, Web of Science, and Ovid/Embase in January 2023. Eligible studies included complete evaluation instruments for medical students and reported learning outcomes. Data extraction covered the study design, the educational intervention, the evaluation instrument, and the outcome measure (e.g., Kirkpatrick levels: 1 reaction, 2a attitudes, 2b knowledge/skills, 3 behavioral change, 4 results). Instruments were categorized as validated, nonvalidated, or qualitative and analyzed using descriptive statistics. Validated instruments were assessed for quality using standardized criteria.

Of the 1909 records identified, 263 were subjected to full-text review and 100 studies met the inclusion criteria. Twenty-seven studies reported on 14 validated instruments, 7 reported on qualitative instruments, and 66 on nonvalidated instruments. Most were conducted in the United States (31) and Europe (28); 51 were cross-sectional studies and 42 were intervention studies. Most instruments were self-administered (50), addressed general aspects of CIM (53), and assessed student attitudes (74). None of the validated instruments covered Kirkpatrick level 1, and only one covered level 3. Measurement of levels 2b and 3 was usually based on subjective self-assessment. Qualitative instruments covered the widest range of outcomes overall. Validated instruments often showed good content validity and internal consistency but lacked evidence of reliability and responsiveness. Revalidation of translated or modified instruments was mostly inadequate.

This structured and comprehensive set of existing instruments provides a starting point for the further development of CIM course evaluation in undergraduate medical education. Future studies should prioritize the measurement of higher-level learning outcomes, such as behavioral change and impact on patient care. Comparative intervention studies between medical schools, or studies with pre-post designs and follow-up evaluations, are needed to assess the effectiveness of different teaching approaches. Regular revalidation of both existing and newly developed instruments is essential to ensure their applicability across audiences and settings. Their structured and standardized use would promote evidence-based CIM training and a better understanding of its impact on student competencies and patient outcomes.
DOI: http://dx.doi.org/10.1089/jicm.2024.0614