Gender Bias in Artificial Intelligence-Written Letters of Reference.

Objective: Letters of reference (LORs) play an important role in postgraduate residency applications. Human-written LORs have been shown to carry implicit gender bias, such as using more agentic versus communal words for men, and more frequent doubt-raisers and references to appearance and personal life for women. This can result in inequitable access to residency opportunities for women. Given the known gendered language often unconsciously inserted into human-written LORs, we sought to identify whether LORs generated by artificial intelligence exhibit gender bias.

Study Design: Observational study.

Setting: Multicenter academic collaboration.

Methods: Prompts describing otherwise identical male and female applicants to Otolaryngology residency positions were created and provided to ChatGPT to generate LORs. These letters were analyzed with a gender-bias calculator that assesses the proportion of male- versus female-associated words.
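
The abstract does not detail the gender-bias calculator the authors used. As a minimal sketch of the general technique, assuming hand-picked lists of male-associated (agentic) and female-associated (communal) words, the proportion could be computed as follows; every word list and name here is illustrative, not the study's tool:

```python
import re

# Illustrative word lists only: the abstract does not publish the
# calculator's lexicon. Agentic terms are commonly treated as
# male-associated, communal terms as female-associated.
MALE_ASSOCIATED = {"ambitious", "assertive", "confident", "decisive",
                   "independent", "leader", "outstanding", "superb"}
FEMALE_ASSOCIATED = {"caring", "compassionate", "helpful", "kind",
                     "nurturing", "pleasant", "supportive", "warm"}

def male_biased_word_percentage(letter: str) -> float:
    """Percentage of gender-coded words in the letter that are male-associated."""
    words = re.findall(r"[a-z'-]+", letter.lower())
    male = sum(w in MALE_ASSOCIATED for w in words)
    female = sum(w in FEMALE_ASSOCIATED for w in words)
    coded = male + female
    return 100.0 * male / coded if coded else 0.0

# Example: three agentic words versus one communal word scores 75%.
sample = "She is an assertive, decisive leader and a supportive colleague."
print(f"{male_biased_word_percentage(sample):.1f}% male-associated")
```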

Results: Regardless of gender, school, research, or other activities, all LORs generated by ChatGPT showed a bias toward male-associated words. There was no significant difference in the percentage of male-biased words between letters written for women and those written for men (39.15% vs 37.85%, P = .77). Significant differences in gender bias were found for each of the other discrete variables (school, research, and other activities).
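
The abstract does not name the statistical test behind P = .77. A sketch of how such a comparison could be run, assuming an unpaired two-sample t-test on per-letter percentages of male-biased words (the numbers below are hypothetical placeholders, not the study's data):

```python
from scipy import stats

# Hypothetical per-letter percentages of male-biased words; the
# study's raw measurements are not reported in the abstract.
letters_for_women = [39.0, 41.2, 38.5, 37.9, 39.1]
letters_for_men = [37.5, 38.9, 36.8, 38.2, 37.9]

# Unpaired two-sample t-test (an assumption; the abstract does not
# specify which test produced P = .77).
t_stat, p_value = stats.ttest_ind(letters_for_women, letters_for_men)
print(f"t = {t_stat:.2f}, P = {p_value:.2f}")
```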

Conclusion: While all ChatGPT-generated LORs showed a male bias in the language used, there was no difference in gender bias between letters produced using traditionally masculine versus traditionally feminine names and pronouns. Other variables did, however, induce gendered language. ChatGPT is a promising tool for LOR drafting, but users must be aware of potential biases introduced or propagated through these technologies.


Source
http://dx.doi.org/10.1002/ohn.806

Publication Analysis

Top Keywords

gender bias: 16
letters reference: 8
human-written lors: 8
gendered language: 8
lors generated: 8
school activities: 8
lors: 7
gender: 6
letters: 5
bias: 5
