Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations.

Lancet Digit Health

University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK; Centre for Patient Reported Outcomes Research, School of Health Sciences, College of Medical and Dental Sciences, Birmingham, UK; University of Birmingham, Birmingham, UK.

Published: January 2025

AI Article Synopsis

  • AI health technologies risk reinforcing existing health inequalities, with bias stemming primarily from the datasets that underpin them.
  • The STANDING Together recommendations promote transparency about health datasets and proactive evaluation of their effects across population groups, informed by a research process involving more than 350 contributors from 58 countries.
  • The 29 recommendations are split into guidance for documenting health datasets and guidance for using them, aiming to identify and mitigate algorithmic biases while raising awareness that no dataset is free of limitations.

Article Abstract

Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One major source of bias is the data that underpins such technologies. The STANDING Together recommendations aim to encourage transparency regarding limitations of health datasets and proactive evaluation of their effect across population groups. Draft recommendation items were informed by a systematic review and stakeholder survey. The recommendations were developed using a Delphi approach, supplemented by a public consultation and international interview study. Overall, more than 350 representatives from 58 countries provided input into this initiative. 194 Delphi participants from 25 countries voted and provided comments on 32 candidate items across three electronic survey rounds and one in-person consensus meeting. The 29 STANDING Together consensus recommendations are presented here in two parts. Recommendations for Documentation of Health Datasets provide guidance for dataset curators to enable transparency around data composition and limitations. Recommendations for Use of Health Datasets aim to enable identification and mitigation of algorithmic biases that might exacerbate health inequalities. These recommendations are intended to prompt proactive inquiry rather than acting as a checklist. We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation. We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies which are safe and effective.

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668905
DOI: http://dx.doi.org/10.1016/S2589-7500(24)00224-3

Publication Analysis

Top Keywords

health datasets (16)
health (8)
standing consensus (8)
recommendations (8)
consensus recommendations (8)
health inequalities (8)
standing recommendations (8)
tackling algorithmic (4)
algorithmic bias (4)
bias promoting (4)
