AI Article Synopsis

  • AI models that excel in clinical settings may not perform well in different locations due to variability in data and practices.
  • The article discusses two main sources of this performance issue: those that researchers can control and those that arise naturally from the way clinical data is generated.
  • It specifically explores how unique clinical practices at different sites can alter data distribution and suggests a method to distinguish these influences from the actual disease patterns that AI models aim to analyze.

Article Abstract

The rising popularity of artificial intelligence in healthcare is highlighting the problem that a computational model achieving super-human clinical performance at its training sites may perform substantially worse at new sites. In this perspective, we argue that we should typically expect this failure to transport, and we present common sources for it, divided into those under the control of the experimenter and those inherent to the clinical data-generating process. Of the inherent sources we look a little deeper into site-specific clinical practices that can affect the data distribution, and propose a potential solution intended to isolate the imprint of those practices on the data from the patterns of disease cause and effect that are the usual target of probabilistic clinical models.
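The abstract's central claim, that a model can work at its training site yet degrade at a new one because site-specific practices shift the data distribution, can be illustrated with a toy simulation. This is not the paper's proposed method; the "assay calibration offset" and all numbers below are invented purely to show how a fixed decision rule stops transporting when a local practice shifts the measured feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_site(n, offset=0.0):
    """Simulate one clinical site.

    y: true disease status (balanced, for simplicity).
    x: a lab value; `offset` stands in for a hypothetical
       site-specific practice (e.g. a differently calibrated assay).
    """
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=np.where(y == 1, 1.0, -1.0) + offset, scale=1.0)
    return x, y

# "Model" fitted at site A: threshold at 0, the optimal rule for site A.
predict = lambda x: (x > 0).astype(int)

x_a, y_a = simulate_site(100_000, offset=0.0)   # training site
x_b, y_b = simulate_site(100_000, offset=1.5)   # new site, shifted calibration

acc_a = (predict(x_a) == y_a).mean()
acc_b = (predict(x_b) == y_b).mean()
print(f"site A accuracy: {acc_a:.3f}")  # about 0.84
print(f"site B accuracy: {acc_b:.3f}")  # about 0.65: the rule fails to transport
```

The disease-to-measurement relationship is identical at both sites; only the site's measurement practice differs, yet the trained rule's accuracy drops sharply. Isolating that practice-driven imprint from the underlying disease pattern is exactly the separation the article proposes.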

Download full-text PDF

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10907678 (PMC)
http://dx.doi.org/10.1038/s41746-024-01037-4 (DOI Listing)

Publication Analysis

Top Keywords

probabilistic clinical (8)
clinical models (8)
models fail (4)
fail transport (4)
transport sites (4)
sites rising (4)
rising popularity (4)
popularity artificial (4)
artificial intelligence (4)
intelligence healthcare (4)
