Publications by authors named "Negar Rostamzadeh"

Article Synopsis
  • AI health technologies risk reinforcing existing health inequalities because of biases, which stem primarily from the datasets used to build them.
  • The STANDING Together recommendations focus on transparency in health datasets and proactive evaluation of their impacts on different population groups, informed by a comprehensive research process with over 350 global contributors.
  • The 29 recommendations are divided into guidance for documenting health datasets and strategies for using them, aiming to identify and reduce algorithmic biases while promoting awareness of the inherent limitations in all datasets.

Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM.

Artificial intelligence as a medical device is increasingly being applied in healthcare for diagnosis, risk stratification, and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part from systemic inequalities in dataset curation, unequal opportunity to participate in research, and inequalities of access.