The Medical Education Research Study Quality Instrument (MERSQI) is widely used to evaluate the quality of quantitative research in medical education. It has strong validity evidence and is endorsed by guidelines. However, manual appraisal is time-consuming and resource-intensive, highlighting the need for more efficient methods. We propose using ChatGPT to evaluate the quality of medical education research with the MERSQI and comparing its scores with those of human evaluators. Using ChatGPT for MERSQI-based appraisal can reduce the resources required for quality assessment, enabling faster evidence summaries and lowering the workload of researchers, editors, and educators. Furthermore, ChatGPT's ability to extract supporting excerpts provides transparency and may also support data extraction and the training of new medical education researchers. We plan to continue evaluating medical education research with ChatGPT using the MERSQI and other instruments to determine the feasibility of this approach, and to investigate for which types of studies ChatGPT performs best.
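One way the comparison between ChatGPT and human raters could be operationalized is sketched below; this is an illustrative example only, not the authors' actual pipeline, and the item names and scores shown are hypothetical placeholders rather than the real MERSQI items.

```python
# Illustrative sketch (not the study's actual method): compare MERSQI
# item scores assigned by an LLM with those of a human rater by
# summing per-item scores and taking the absolute difference in totals.
# Item names and score values below are hypothetical placeholders.

def total_score(item_scores: dict[str, float]) -> float:
    """Sum per-item MERSQI scores into a total score."""
    return sum(item_scores.values())

def score_difference(llm: dict[str, float], human: dict[str, float]) -> float:
    """Absolute difference between LLM and human MERSQI totals."""
    if llm.keys() != human.keys():
        raise ValueError("Both raters must score the same set of items")
    return abs(total_score(llm) - total_score(human))

# Hypothetical scores for a single appraised study
llm_scores = {"study_design": 2.0, "sampling": 1.5, "data_analysis": 3.0}
human_scores = {"study_design": 2.0, "sampling": 2.0, "data_analysis": 3.0}

print(score_difference(llm_scores, human_scores))  # 0.5
```

In practice, agreement would likely be assessed over many studies with statistics such as intraclass correlation rather than a single difference, but the per-item structure above shows where LLM and human scores would be aligned.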
DOI: http://dx.doi.org/10.1080/0142159X.2024.2385678