Background: While faculty have previously been shown to have high levels of agreement about the competitiveness of emergency medicine (EM) standardized letters of evaluation (SLOEs), reviewing SLOEs remains a highly time-intensive process for faculty. Artificial intelligence large language models (LLMs) have shown promise for effectively analyzing large volumes of data across a variety of contexts, but their ability to interpret SLOEs is unknown.
Objective: The objective was to evaluate the ability of LLMs to rate EM SLOEs on competitiveness compared to faculty consensus and previously developed algorithms.
Methods: Fifty mock SLOE letters were drafted and analyzed seven times by a data-focused LLM with instructions to rank them based on desirability for residency. The LLM was also asked to use its own criteria to decide which characteristics are most important for residency and revise its ranking of the SLOEs. LLM-generated rank lists were compared with faculty consensus rankings.
Results: There was a high degree of correlation (0.96) between the rank list initially generated by LLM consensus and the rank list generated by trained faculty. The correlation between the revised list generated by the LLM and the faculty consensus was lower (0.86).
Conclusions: The LLM-generated rankings showed strong correlation with expert faculty consensus rankings while requiring minimal input of faculty time and effort.
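The abstract reports agreement between LLM and faculty rank lists as a correlation but does not name the statistic. Assuming Spearman's rank correlation, a common choice for comparing two rank orderings, the comparison can be sketched as follows (the ranking values are illustrative, not study data):

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman's rank correlation for two rankings of the same items.

    Assumes both lists are permutations of 1..n with no ties, so the
    closed-form rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)) applies.
    """
    if len(ranks_a) != len(ranks_b):
        raise ValueError("rank lists must be the same length")
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_squared) / (n * (n**2 - 1))


# Hypothetical example: faculty and LLM ranks for five SLOEs.
faculty_ranks = [1, 2, 3, 4, 5]
llm_ranks = [1, 3, 2, 4, 5]
print(spearman_rho(faculty_ranks, llm_ranks))  # -> 0.9
```

With 50 letters, as in the study, the same function applies unchanged; a value near 0.96 would indicate the two rank lists are nearly identical in order.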
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11628426
DOI: http://dx.doi.org/10.1002/aet2.11052