Objective: Statistical and artificial intelligence algorithms are increasingly being developed for use in healthcare. These algorithms may reflect biases that magnify disparities in clinical care, and there is a growing need to understand how such biases can be mitigated in pursuit of algorithmic fairness. Individual fairness constrains an algorithm to the notion that "similar individuals should be treated similarly" (see the formal sketch after this abstract). We conducted a scoping review of algorithmic individual fairness to understand the current state of research on the metrics and methods developed to achieve individual fairness and on its applications in healthcare.
Methods: We searched three databases, PubMed, ACM Digital Library, and IEEE Xplore, for articles on algorithmic individual fairness metrics, algorithmic bias mitigation, and healthcare applications. The search was restricted to articles published between January 2013 and September 2023. We identified 1,886 articles through the database searches and one additional article through manual searching; 30 of these articles were included in the review. Data from the selected articles were extracted, and the findings were synthesized.
Results: Based on the 30 articles included in the review, we identified several themes: philosophical underpinnings of fairness, individual fairness metrics, mitigation methods for achieving individual fairness, the implications of achieving individual fairness for group fairness and vice versa, fairness metrics that combine individual and group fairness, software for measuring and optimizing individual fairness, and applications of individual fairness in healthcare.
Conclusion: While there has been significant work on algorithmic individual fairness in recent years, the definition, use, and study of individual fairness remain in their infancy, especially in healthcare. Future research is needed to apply and evaluate individual fairness in healthcare comprehensively.
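The abstract defines individual fairness informally as "similar individuals should be treated similarly." The formalization most often cited in this literature, due to Dwork et al. (2012), expresses this as a Lipschitz condition on the mapping M from individuals to distributions over outcomes; the choice of metrics d (similarity between individuals) and D (distance between outcome distributions) is task-specific and not prescribed by the review, so the sketch below is illustrative rather than the review's own definition:

\[
D\big(M(x), M(y)\big) \;\le\; L \, d(x, y) \quad \text{for all individuals } x, y,
\]

where L is a Lipschitz constant (often taken to be 1). Intuitively, two individuals who are close under d must receive nearly identical distributions over outcomes.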
Full text: PMC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10996729
DOI: http://dx.doi.org/10.1101/2024.03.25.24304853
PLoS One
January 2025
Center for International Education and Exchange, Osaka University, Suita, Osaka, Japan.
Background: Artificial intelligence (AI) is anticipated to play a significant role in criminal trials involving citizen jurors. Prior studies have suggested that AI is not widely preferred in ethical decision-making contexts, but little research has compared jurors' reliance on judgments by human judges versus AI in such settings.
Objectives: This study examined whether jurors are more likely to defer to judgments by human judges or AI, especially in cases involving mitigating circumstances in which human-like reasoning may be valued.
BMC Med Educ
January 2025
University of Minnesota Medical School, 420 Delaware Street SE, Mayo Building, Minneapolis, MN, 55455, USA.
Background: A common practice in assessment development, fundamental for fairness and consequently the validity of test score interpretations and uses, is to ascertain whether test items function equally across test-taker groups. Accordingly, we conducted differential item functioning (DIF) analysis, a psychometric procedure for detecting potential item bias, for three preclinical medical school foundational courses based on students' sex and race.
Methods: The sample included 520, 519, and 344 medical students for anatomy, histology, and physiology, respectively, collected from 2018 to 2020.
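The entry above describes differential item functioning (DIF) analysis only at a high level, and the authors' actual procedure and software are not specified here. One widely used approach is the logistic-regression DIF screen, which fits nested models per item: item response on a matching score, plus a group main effect (uniform DIF) and a score-by-group interaction (nonuniform DIF). A minimal sketch, with hypothetical column names:

```python
import statsmodels.formula.api as smf
from scipy import stats

def dif_screen(df, item="item_correct", score="rest_score", group="group"):
    """Logistic-regression DIF screen for one dichotomous item.

    df columns (hypothetical names): a 0/1 item response, a matching score
    (e.g., total or rest score), and a 0/1 group indicator (e.g., sex).
    Returns p-values from likelihood-ratio tests for uniform DIF (group main
    effect) and nonuniform DIF (score-by-group interaction).
    """
    base = smf.logit(f"{item} ~ {score}", data=df).fit(disp=0)
    uniform = smf.logit(f"{item} ~ {score} + {group}", data=df).fit(disp=0)
    nonuniform = smf.logit(f"{item} ~ {score} * {group}", data=df).fit(disp=0)

    lr_uniform = 2 * (uniform.llf - base.llf)            # 1 df
    lr_nonuniform = 2 * (nonuniform.llf - uniform.llf)   # 1 df
    return {
        "uniform_dif_p": stats.chi2.sf(lr_uniform, df=1),
        "nonuniform_dif_p": stats.chi2.sf(lr_nonuniform, df=1),
    }
```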
J Racial Ethn Health Disparities
January 2025
Department of Biomedical Informatics, College of Medicine, University of Arkansas for Medical Sciences, Little Rock, AR, USA.
Context: To evaluate algorithmic fairness in low birthweight predictive models.
Study Design: This study analyzed insurance claims (n = 9,990,990; 2013-2021) linked with birth certificates (n = 173,035; 2014-2021) from the Arkansas All Payers Claims Database (APCD).
Methods: Low birthweight (< 2500 g) predictive models included four approaches (logistic regression, elastic net, linear discriminant analysis, and gradient boosting machines [GBM]) with and without racial/ethnic information.
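The abstract lists the model families but not the fairness audit itself. As a rough sketch of one way such an audit could be set up (the column names, the train/test split, and the AUC-by-group comparison are assumptions, not the study's actual code):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_and_audit(X, y, race, model=None):
    """Fit a low-birthweight classifier and report AUC overall and by group.

    X:    claims-derived predictors (with or without race/ethnicity columns)
    y:    1 if birthweight < 2500 g, else 0
    race: race/ethnicity label per record, used only for the audit
    Assumes both outcome classes appear within each racial/ethnic group.
    """
    model = model or GradientBoostingClassifier()
    X_tr, X_te, y_tr, y_te, _, r_te = train_test_split(
        X, y, race, test_size=0.3, stratify=y, random_state=0
    )
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]

    audit = {"overall_auc": roc_auc_score(y_te, scores)}
    for g in np.unique(r_te):  # large between-group gaps flag potential unfairness
        mask = np.asarray(r_te) == g
        audit[f"auc_{g}"] = roc_auc_score(np.asarray(y_te)[mask], scores[mask])
    return audit

# e.g. fit_and_audit(X, y, race, model=LogisticRegression(max_iter=1000))
```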
Sci Rep
January 2025
Department of Urology, Vanderbilt University Medical Center, Nashville, USA.
Recent advancements of large language models (LLMs) like generative pre-trained transformer 4 (GPT-4) have generated significant interest among the scientific community. Yet, the potential of these models to be utilized in clinical settings remains largely unexplored. In this study, we investigated the abilities of multiple LLMs and traditional machine learning models to analyze emergency department (ED) reports and determine if the corresponding visits were due to symptomatic kidney stones.
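The entry summarizes the study design without implementation details. As a hedged sketch of the two ingredients it compares (the placeholder reports, prompt wording, and model name are illustrative assumptions; the study's actual pipeline may differ):

```python
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, de-identified ED report snippets and labels
# (1 = visit due to symptomatic kidney stones).
reports = ["flank pain radiating to the groin with hematuria",
           "chest pain and shortness of breath on exertion"]
labels = [1, 0]

# Traditional ML baseline: bag-of-words text classifier.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(reports, labels)

# Zero-shot LLM labelling of one report (model and prompt are illustrative).
client = OpenAI()

def llm_label(report_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: is this ED visit due to symptomatic kidney stones?"},
            {"role": "user", "content": report_text},
        ],
    )
    return resp.choices[0].message.content.strip()
```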
Front Public Health
January 2025
School of Journalism and Communication, Guangxi University, Nanning, China.
With the development of social media platforms such as Weibo, they have provided a broad platform for the expression of public sentiments during the pandemic. This study aims to explore the emotional attitudes of Chinese netizens toward the COVID-19 opening-up policies and their related thematic characteristics. Using Python, 145,851 texts were collected from the Weibo platform.
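The entry notes only that Python was used to collect the Weibo posts; the analysis pipeline is not described here. As a minimal, assumed illustration of scoring Chinese-language posts for sentiment (the library choice, SnowNLP, and the sample posts are assumptions, not the study's method):

```python
from snownlp import SnowNLP

# Hypothetical Weibo-style posts about the COVID-19 opening-up policies.
posts = ["放开以后终于可以回家过年了", "政策变化太快，有点担心家里老人"]

for post in posts:
    score = SnowNLP(post).sentiments  # ~0 = negative, ~1 = positive
    print(f"{score:.2f}\t{post}")
```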