Artificial Intelligence Compared to Manual Selection of Prospective Surgical Residents.

Background: Artificial intelligence (AI) is gaining traction as a tool for selecting residency program applicants, with the aim of screening large numbers of applicants while introducing objectivity and mitigating bias in a traditionally subjective process. This study compares applicants screened by AI software with those screened by a single Program Director (PD) for interview selection.

Methods: A single PD at an ACGME-accredited, academic general surgery program screened applicants. A parallel screen by AI software, programmed by the same PD, was conducted on the same pool of applicants. Weighted preferences were assigned in the following order: personal statement, research, medical school rankings, letters of recommendation, personal qualities, board scores, graduate degree, geographic preference, past experiences, program signal, honor society membership, and multilingualism. Statistical analyses were conducted using chi-square tests, ANOVA, and independent two-sided t-tests.
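
The following minimal Python/SciPy sketch illustrates the kinds of tests named above (chi-square, one-way ANOVA, and an independent two-sided t-test) on hypothetical data; the group sizes loosely follow the cohort described in the Results, and none of the values reproduce the study's data.

# Hedged illustration only: hypothetical data, not the study's variables or values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 2x2 contingency table: selection group (rows) vs. a binary trait (columns).
table = np.array([[60, 84],    # e.g. PD-selected: trait present / absent
                  [95, 55]])   # e.g. AI-selected: trait present / absent
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Hypothetical continuous measure (e.g. a board score) in the three selection groups.
pd_only = rng.normal(250, 8, 124)   # PD-selected only
ai_only = rng.normal(254, 8, 130)   # AI-selected only
overlap = rng.normal(252, 8, 20)    # selected by both
f_stat, p_anova = stats.f_oneway(pd_only, ai_only, overlap)

# Independent two-sided t-test comparing the two non-overlapping groups.
t_stat, p_t = stats.ttest_ind(pd_only, ai_only)

print(f"chi-square p={p_chi:.3f}, ANOVA p={p_anova:.3f}, t-test p={p_t:.3f}")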

Results: Of 1235 applications, 144 were PD-selected and 150 were AI-selected (294 top applications). Twenty applications (7.3%) were selected by both the PD and the AI, for a total analysis cohort of 274 prospective residents. We performed two analyses: 1) PD-selected vs. AI-selected vs. both, and 2) PD-selected vs. AI-selected with the overlapping applicants censored. In the first analysis, AI selected significantly more White/Hispanic applicants (p < 0.001), fewer program signals (p < 0.001), more AOA honor society members (p = 0.016), and applicants with more publications (p < 0.001). With the overlapping PD and AI selections censored, AI selected significantly more White/Hispanic applicants (p < 0.001), fewer program signals (p < 0.001), more US medical graduates (p = 0.027), fewer applicants needing visa sponsorship (p = 0.01), younger applicants (p = 0.024), applicants with higher USMLE Step 2 CK scores (p < 0.001), and applicants with more publications (p < 0.001).
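
As a quick arithmetic check of the overlap figure reported above, using only the counts stated in the Results:

# 144 PD-selected + 150 AI-selected selections include 20 chosen by both,
# giving 274 unique applicants and an overlap of about 7.3%.
pd_selected, ai_selected, overlap = 144, 150, 20
unique_applicants = pd_selected + ai_selected - overlap   # 274
overlap_rate = overlap / unique_applicants                # 20 / 274 ≈ 0.073
print(unique_applicants, f"{overlap_rate:.1%}")           # 274 7.3%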

Conclusions: There was only a 7% overlap between PD-selected and AI-selected applicants for interview screening in the same applicant pool. Even though the AI software was programmed by the same PD, the two selected pools differed significantly. In its present state, AI may be used as a tool in resident application selection but should not completely replace human review. We recommend that each institution carefully analyze the performance of any AI model in its own environment, as the model may alter the group of interviewees.


Source: http://dx.doi.org/10.1016/j.jsurg.2024.103308

