Objective: Statistical and artificial intelligence algorithms are increasingly being developed for use in healthcare. These algorithms may reflect biases that magnify disparities in clinical care, and there is a growing need to understand how algorithmic biases can be mitigated in pursuit of algorithmic fairness. Individual fairness constrains algorithms to the notion that "similar individuals should be treated similarly." We conducted a scoping review on algorithmic individual fairness to understand the current state of research in the metrics and methods developed to achieve individual fairness and its applications in healthcare.
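The notion that "similar individuals should be treated similarly" is commonly operationalized as a consistency metric: compare each individual's prediction with the predictions for its nearest neighbors in feature space. A minimal sketch follows; the function name, the choice of Euclidean distance, and the k-nearest-neighbor notion of similarity are illustrative assumptions, not definitions taken from the review.

```python
import numpy as np

def consistency_score(X, y_pred, k=5):
    """Illustrative individual-fairness consistency score.

    For each individual, compare the model's prediction with the mean
    absolute difference from its k nearest neighbors' predictions.
    A score of 1.0 means similar individuals receive identical
    predictions; lower scores indicate individual-level inconsistency.
    """
    X = np.asarray(X, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(X)
    total_diff = 0.0
    for i in range(n):
        # Euclidean distance from individual i to every other individual
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf  # exclude self from the neighborhood
        neighbors = np.argsort(dists)[:k]
        total_diff += np.abs(y_pred[i] - y_pred[neighbors]).mean()
    return 1.0 - total_diff / n
```

A classifier that assigns the same label to all near-identical feature vectors scores 1.0; one whose predictions flip between close neighbors scores lower, flagging a potential individual-fairness violation.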

Methods: We searched three databases, PubMed, ACM Digital Library, and IEEE Xplore, for algorithmic individual fairness metrics, algorithmic bias mitigation, and healthcare applications. Our search was restricted to articles published between January 2013 and September 2023. We identified 1,886 articles through the database searches and one additional article manually; of these, 30 articles were included in the review. Data from the selected articles were extracted, and the findings were synthesized.

Results: Based on the 30 articles in the review, we identified several themes, including philosophical underpinnings of fairness, individual fairness metrics, mitigation methods for achieving individual fairness, implications of achieving individual fairness on group fairness and vice versa, fairness metrics that combined individual fairness and group fairness, software for measuring and optimizing individual fairness, and applications of individual fairness in healthcare.

Conclusion: While there has been significant work on algorithmic individual fairness in recent years, the definition, use, and study of individual fairness remain in their infancy, especially in healthcare. Future research is needed to apply and evaluate individual fairness in healthcare comprehensively.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10996729
DOI: http://dx.doi.org/10.1101/2024.03.25.24304853

Publication Analysis

Top Keywords: individual fairness (56), fairness (19), algorithmic individual (16), individual (13), fairness metrics (12), fairness healthcare (8), scoping review (8), fairness individual (8), fairness applications (8), articles review (8)

Similar Publications

Background: Artificial intelligence (AI) is anticipated to play a significant role in criminal trials involving citizen jurors. Prior studies have suggested that AI is not widely preferred in ethical decision-making contexts, but little research has compared jurors' reliance on judgments by human judges versus AI in such settings.

Objectives: This study examined whether jurors are more likely to defer to judgments by human judges or AI, especially in cases involving mitigating circumstances in which human-like reasoning may be valued.


Background: A common practice in assessment development, fundamental for fairness and consequently the validity of test score interpretations and uses, is to ascertain whether test items function equally across test-taker groups. Accordingly, we conducted differential item functioning (DIF) analysis, a psychometric procedure for detecting potential item bias, for three preclinical medical school foundational courses based on students' sex and race.

Methods: The sample included 520, 519, and 344 medical students for anatomy, histology, and physiology, respectively, collected from 2018 to 2020.


Context: To evaluate algorithmic fairness in low birthweight predictive models.

Study Design: This study analyzed insurance claims (n = 9,990,990; 2013-2021) linked with birth certificates (n = 173,035; 2014-2021) from the Arkansas All Payers Claims Database (APCD).

Methods: Low birthweight (< 2500 g) predictive models included four approaches (logistic regression, elastic net, linear discriminant analysis, and gradient boosting machines [GBM]) with and without racial/ethnic information.


Recent advancements of large language models (LLMs) like generative pre-trained transformer 4 (GPT-4) have generated significant interest among the scientific community. Yet, the potential of these models to be utilized in clinical settings remains largely unexplored. In this study, we investigated the abilities of multiple LLMs and traditional machine learning models to analyze emergency department (ED) reports and determine if the corresponding visits were due to symptomatic kidney stones.


Social media platforms such as Weibo have provided a broad venue for the expression of public sentiment during the pandemic. This study aims to explore the emotional attitudes of Chinese netizens toward the COVID-19 opening-up policies and their related thematic characteristics. Using Python, 145,851 texts were collected from the Weibo platform.

