Publications by authors named "J E Alderman"

Article Synopsis
  • AI health technologies carry a significant risk of reinforcing existing health inequalities because of biases, largely stemming from the datasets used to develop them.
  • The STANDING Together recommendations focus on transparency in health datasets and proactive evaluation of their impacts on different population groups, informed by a comprehensive research process with over 350 global contributors.
  • The 29 recommendations are divided into guidance for documenting health datasets and strategies for using them, aiming to identify and reduce algorithmic biases while promoting awareness of the inherent limitations in all datasets.
Article Synopsis
  • This review analyzes various mammography datasets used for AI development in breast cancer screening, focusing on their transparency, content, and accessibility.
  • A search identified 254 datasets, of which only 28 were accessible; most datasets came from Europe, East Asia, and North America, raising concerns over poor demographic representation.
  • The findings highlight significant gaps in diversity within these datasets, underscoring the need for better documentation and inclusivity to enhance the effectiveness of AI technologies in breast cancer research.
Article Synopsis
  • During the COVID-19 pandemic, AI models were developed to help manage healthcare resource constraints, but previous studies showed that the underlying datasets often have limitations that lead to biased outcomes.
  • A systematic review analyzed 192 healthcare datasets from MEDLINE and Google Dataset Search, focusing on metadata completeness, accessibility, and ethical considerations.
  • Results indicated significant shortfalls: only 48% reported the country of origin, 43% reported age, and under 25% included demographic attributes such as sex or race, underscoring the need for improved data quality and transparency to avoid bias in future AI health applications.

Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access.
