In data collection for predictive modeling, under-representation of certain groups, defined by gender, race/ethnicity, or age, may yield less accurate predictions for those groups. This issue of fairness in prediction has recently attracted significant attention, as data-driven models are increasingly used for crucial decision-making tasks. Existing methods in the machine learning literature typically build a single prediction model in a manner that encourages fair prediction performance for all groups. These approaches have two major limitations: i) fairness is often achieved by compromising accuracy for some groups; ii) the underlying relationship between the dependent and independent variables may not be the same across groups. We propose a Joint Fairness Model (JFM) approach for logistic regression models for binary outcomes that estimates group-specific classifiers using a joint objective function incorporating fairness criteria for prediction. We introduce an Accelerated Smoothing Proximal Gradient Algorithm to solve the convex objective function, and present the key asymptotic properties of the JFM estimates. Through simulations, we demonstrate the efficacy of the JFM in achieving good prediction performance and across-group parity, in comparison with the single fairness model, the group-separate model, and the group-ignorant model, especially when the minority group's sample size is small. Finally, we demonstrate the utility of the JFM in a real-world example: obtaining fair risk predictions for under-represented older patients diagnosed with coronavirus disease 2019 (COVID-19).
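
The abstract states only that the JFM couples group-specific logistic regression classifiers through a convex joint objective with a fairness penalty, solved by an Accelerated Smoothing Proximal Gradient Algorithm. As a minimal sketch of that idea, the snippet below assumes an L1 sparsity term plus an L1 fusion term on pairwise coefficient differences as a stand-in for the paper's fairness criteria, and uses plain subgradient descent in place of the paper's solver; all names (`joint_objective`, `fit_jfm_sketch`, `lam1`, `lam2`) and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def joint_objective(betas, Xs, ys, lam1, lam2):
    """Per-group logistic losses plus an L1 sparsity penalty and an L1
    fusion penalty on pairwise coefficient differences (an assumed
    stand-in for the paper's fairness term)."""
    obj = 0.0
    for X, y, b in zip(Xs, ys, betas):
        p = sigmoid(X @ b)
        obj -= np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    obj += lam1 * sum(np.abs(b).sum() for b in betas)
    for i in range(len(betas)):
        for j in range(i + 1, len(betas)):
            obj += lam2 * np.abs(betas[i] - betas[j]).sum()
    return obj

def fit_jfm_sketch(Xs, ys, lam1=0.01, lam2=0.1, lr=0.1, iters=2000):
    """Plain subgradient descent as a simple stand-in for the paper's
    accelerated smoothing proximal gradient solver."""
    betas = [np.zeros(X.shape[1]) for X in Xs]
    for _ in range(iters):
        new_betas = []
        for k, (X, y) in enumerate(zip(Xs, ys)):
            # Gradient of the group-k logistic loss ...
            g = X.T @ (sigmoid(X @ betas[k]) - y) / len(y)
            # ... plus subgradients of the sparsity and fusion penalties.
            g += lam1 * np.sign(betas[k])
            for j in range(len(betas)):
                if j != k:
                    g += lam2 * np.sign(betas[k] - betas[j])
            new_betas.append(betas[k] - lr * g)
        betas = new_betas
    return betas

# Toy data: a large majority group and a small minority group whose
# true coefficients differ slightly.
b_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
X_maj = rng.normal(size=(500, 5))
y_maj = (rng.random(500) < sigmoid(X_maj @ b_true)).astype(float)
X_min = rng.normal(size=(40, 5))
y_min = (rng.random(40) < sigmoid(X_min @ (b_true + 0.3))).astype(float)

beta_maj, beta_min = fit_jfm_sketch([X_maj, X_min], [y_maj, y_min])
print(np.round(beta_maj, 2), np.round(beta_min, 2))
```

Shrinking pairwise coefficient differences lets the small minority group borrow strength from the majority group while each group keeps its own classifier, which is consistent with the abstract's emphasis on small minority-group sample sizes.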

Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8132236

Publication Analysis

Top Keywords

fairness model (12); joint fairness (8); risk predictions (8); predictions under-represented (8); prediction performance (8); objective function (8); model (6); fairness (6); groups (5); model applications (4)

Similar Publications

Enhancing equity in academic surgery promotion practices.

Surgery, January 2025. Section of Otolaryngology Head and Neck Surgery, Department of Surgery, University of Chicago, Pritzker School of Medicine, IL.

Background: Black, Indigenous, and People of Color (BIPOC) faculty in medicine and women faculty have lower 10-year promotion rates than their White and male peers, even after controlling for productivity metrics. Promotion standards vary across institutions, but there is likely a common need to improve transparency and consistency while mitigating bias, inequity, and the harm of the additional equity work commonly expected of BIPOC and women faculty (the so-called minority tax).

Methods: A promotion advisory committee consisting of clinical and research faculty at all ranks specified expectations for a faculty member at the associate or full professor ranks, with 10-15 examples given for each "mission" (clinical, research, and education).

Background: Generative AI, particularly large language models (LLMs), holds great potential for improving patient care and operational efficiency in healthcare. However, the use of LLMs is complicated by regulatory concerns around data security and patient privacy. This study aimed to develop and evaluate a secure infrastructure that allows researchers to safely leverage LLMs in healthcare while ensuring HIPAA compliance and promoting equitable AI.

Background: The COVID-19 pandemic has highlighted the crucial role of artificial intelligence (AI) in predicting mortality and guiding healthcare decisions. However, AI models may perpetuate or exacerbate existing health disparities due to demographic biases, particularly affecting racial and ethnic minorities. The objective of this study is to investigate the demographic biases in AI models predicting COVID-19 mortality and to assess the effectiveness of transfer learning in improving model fairness across diverse demographic groups.
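
This snippet does not spell out the study's transfer-learning protocol. A common pattern, shown here as a hedged sketch, is to pretrain a classifier on the pooled cohort and then fine-tune it on the under-represented group at a low learning rate; the data, dimensions, and schedule below are all illustrative, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical pooled cohort with a small under-represented subgroup.
rng = np.random.default_rng(1)
X_all = rng.normal(size=(2000, 10))
y_all = (X_all[:, 0] + 0.5 * X_all[:, 1]
         + rng.normal(scale=0.5, size=2000) > 0).astype(int)
minority = rng.random(2000) < 0.05
X_min, y_min = X_all[minority], y_all[minority]

# Stage 1: pretrain a logistic model on the pooled cohort.
clf = SGDClassifier(loss="log_loss", warm_start=True, random_state=0)
clf.fit(X_all, y_all)

# Stage 2: fine-tune on the minority group with a few low-learning-rate
# passes; warm_start=True keeps the pooled weights as the starting point.
clf.set_params(learning_rate="constant", eta0=1e-3, max_iter=5)
clf.fit(X_min, y_min)
print("minority-group accuracy:", clf.score(X_min, y_min))
```

Starting from the pooled weights rather than from scratch is what lets the small subgroup benefit from the majority data while the fine-tuning pass adapts the model to subgroup-specific patterns.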

Objective: This study investigates the effects of perceived organizational fairness, organizational identity, and trust on university teachers' intrinsic motivation for professional development, and examines the mediating roles of organizational identity and trust.

Method: The study adopts a quantitative methodology, constructing and validating a structural equation model of the relationships between perceived organizational fairness, organizational identity, trust, and intrinsic motivation for professional development.

Good practices in artificial intelligence (AI) model validation are key to achieving trustworthy AI. Focusing on the cancer imaging domain, which attracts both clinical and technical AI practitioners, this work discusses current gaps in AI validation strategies, examining practices that are common or variable across technical groups (TGs) and clinical groups (CGs). The work is based on a set of structured questions covering several AI validation topics, addressed to professionals working in AI for medical imaging.
