Publications by authors named "J Alderman"

Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One major source of bias is the data that underpins such technologies. The STANDING Together recommendations aim to encourage transparency regarding limitations of health datasets and proactive evaluation of their effect across population groups.

  • This review analyzes various mammography datasets used for AI development in breast cancer screening, focusing on their transparency, content, and accessibility.
  • A search identified 254 datasets, of which only 28 were accessible; most originated from Europe, East Asia, and North America, raising concerns about poor demographic representation.
  • The findings highlight significant gaps in diversity within these datasets, underscoring the need for better documentation and inclusivity to enhance the effectiveness of AI technologies in breast cancer research.

During the COVID-19 pandemic, artificial intelligence (AI) models were created to address health-care resource constraints. Previous research shows that health-care datasets often have limitations that can lead to biased AI technologies. This systematic review assessed datasets used for AI development during the pandemic and identified several deficiencies.


Artificial intelligence as a medical device is increasingly being applied in healthcare for diagnosis, risk stratification, and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part from systemic inequalities in dataset curation, unequal opportunity to participate in research, and inequalities of access.
