
Introducing AI as members of script concordance test expert reference panel: A comparative analysis.

Background: The Script Concordance Test (SCT) is increasingly used in professional development to assess clinical reasoning, with linear progression in SCT performance observed as clinical experience increases. One challenge in implementing SCT is the potential burnout of expert reference panel (ERP) members. To address this, we introduced ChatGPT models as panel members. The aim was to enhance the efficiency of SCT creation while maintaining educational content quality and to explore the effectiveness of different models as reference panels.

Methodology: A quasi-experimental comparative design was employed, involving all undergraduate medical students and faculty members enrolled in the Ophthalmology clerkship. Two groups were compared: a traditional ERP of 15 experts with varied clinical experience (5 senior residents, 5 lecturers, and 5 professors), and an AI-generated ERP created with ChatGPT-4 and o1-preview, designed to mirror diverse clinical opinions across experience levels.
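
For readers unfamiliar with how a reference panel's answers become a scoring key, the sketch below illustrates the conventional aggregate (partial-credit) scoring method commonly used for SCTs: each Likert option earns credit in proportion to the number of panel members who selected it, with the modal answer worth full credit. The panel responses, scale, and values shown are hypothetical; the paper does not state its exact scoring implementation, so this is only a general illustration of the technique.

from collections import Counter

def sct_scoring_key(panel_responses):
    # Aggregate scoring: credit for an option = votes for that option / votes for the modal option.
    counts = Counter(panel_responses)
    modal_votes = max(counts.values())
    return {option: votes / modal_votes for option, votes in counts.items()}

# Hypothetical 15-member traditional ERP rating one vignette on a -2..+2 Likert scale.
traditional_panel = [-1, -1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2]
key = sct_scoring_key(traditional_panel)

# A student choosing the modal answer (+1) earns full credit; other answers earn partial credit.
print(key[1])   # 1.0
print(key[2])   # 3/7, roughly 0.43

Under this method, an AI-generated panel simply substitutes model-produced responses for the human votes above, so differences in panel composition show up directly in the scoring key.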

Results: Experts consistently achieved the highest mean scores across most vignettes, with ChatGPT-4 and o1 scores generally slightly lower. Notably, the o1 mean scores were closer to those of the experts than the ChatGPT-4 scores were. Significant differences were observed between ChatGPT-4 and o1 scores in certain vignettes. Overall, the ratings showed a strong level of consistency, suggesting that both the experts and the AI models provided highly reliable ratings.

Conclusion: These findings suggest that while AI models cannot replace human experts, they can be effectively used to train students, enhance reasoning skills, and help narrow the gap between student and expert performance.


Source
http://dx.doi.org/10.1080/0142159X.2025.2473620
