Manual annotation of medical images is highly subjective, leading to inevitable annotation biases. Deep learning models may surpass human performance on a variety of tasks, but they may also mimic or amplify these biases. Although we can recruit multiple annotators and fuse their annotations to reduce stochastic errors, this strategy cannot handle the bias caused by annotators' preferences. In this paper, we highlight the issue of annotator-related biases in medical image segmentation tasks and propose a Preference-involved Annotation Distribution Learning (PADL) framework to address it by modeling each annotator's preference and stochastic errors, producing not only a meta segmentation but also annotator-specific segmentations. Under this framework, a stochastic error modeling (SEM) module estimates the meta segmentation map and the average stochastic error map, and a series of human preference modeling (HPM) modules estimates each annotator's segmentation and its corresponding stochastic error. We evaluated our PADL framework on two medical image benchmarks with different imaging modalities, each annotated by multiple medical professionals, and it achieved promising performance on all five medical image segmentation tasks. Code is available at https://github.com/Merrical/PADL.
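To make the described decomposition concrete, the sketch below shows one way the architecture could be organized: a shared backbone whose features feed an SEM head (meta segmentation plus average stochastic error map) and one HPM head per annotator (annotator-specific segmentation plus its stochastic error). This is only a minimal illustration, not the authors' implementation (see the linked repository for that); the simple convolutional backbone, the per-pixel mean/scale parameterisation of the stochastic error, and names such as `PADLSketch` and `backbone_channels` are assumptions made for clarity.

```python
# Hypothetical sketch of the SEM/HPM decomposition described in the abstract.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Stand-in backbone stage: two 3x3 convolutions with BatchNorm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DistributionHead(nn.Module):
    """Predicts per-pixel segmentation logits and a stochastic-error (scale) map."""

    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.mean = nn.Conv2d(in_ch, num_classes, kernel_size=1)   # segmentation logits
        self.scale = nn.Conv2d(in_ch, num_classes, kernel_size=1)  # stochastic error

    def forward(self, feats):
        # Softplus keeps the error map strictly positive.
        return self.mean(feats), nn.functional.softplus(self.scale(feats))


class PADLSketch(nn.Module):
    """Shared backbone + SEM head (meta segmentation) + one HPM head per annotator."""

    def __init__(self, in_ch=1, num_classes=1, num_annotators=3, backbone_channels=32):
        super().__init__()
        self.backbone = conv_block(in_ch, backbone_channels)
        self.sem_head = DistributionHead(backbone_channels, num_classes)
        self.hpm_heads = nn.ModuleList(
            DistributionHead(backbone_channels, num_classes) for _ in range(num_annotators)
        )

    def forward(self, x):
        feats = self.backbone(x)
        meta_mean, meta_scale = self.sem_head(feats)          # SEM: meta map + avg. error
        annotator_outputs = [head(feats) for head in self.hpm_heads]  # HPM: per-annotator
        return (meta_mean, meta_scale), annotator_outputs


if __name__ == "__main__":
    model = PADLSketch(in_ch=1, num_classes=1, num_annotators=3)
    x = torch.randn(2, 1, 64, 64)
    (meta_mean, meta_scale), per_annotator = model(x)
    print(meta_mean.shape, meta_scale.shape, len(per_annotator))
```

In such a layout, the meta prediction and each annotator-specific prediction would be supervised against the fused and individual annotations respectively; the scale maps give the model a way to absorb stochastic disagreement separately from systematic preference.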
DOI: http://dx.doi.org/10.1016/j.media.2023.103028