Publications by authors named "Tristan Payer"

Background: Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate strong potential for contrastive pre-training on medical images; however, further research is needed to incorporate the particular characteristics of such images.
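For context, contrastive pre-training typically pulls two augmented views of the same image together in embedding space while pushing apart the other images in the batch. Below is a minimal sketch of one standard objective of this kind, the NT-Xent loss popularized by SimCLR; the function name, temperature, and batch conventions are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings; (z1[i], z2[i]) are positive pairs, and every
    other sample in the combined batch serves as a negative.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    n = z1.size(0)
    # Row i's positive sits at i+n (first view) or i-n (second view).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In a pre-training loop, z1 and z2 would come from passing two random augmentations of the same image batch through the encoder being pre-trained.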

Article Synopsis
  • Deep learning has the potential to improve medical imaging by reducing diagnostic errors and radiologist workload, but requires large annotated datasets for training, which are often scarce.
  • Self-supervised learning methods allow models to be pre-trained on large unannotated datasets, making it feasible to fine-tune them with smaller annotated datasets for specific tasks.
  • The study compares two self-supervised pre-training methods, finding that the masked autoencoder approach "SparK" is more effective and robust than contrastive methods when working with limited annotated data in medical imaging (a minimal sketch of the masked-autoencoder idea follows this list).
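The core idea behind masked-autoencoder pre-training, of which SparK is one instance, can be sketched as follows: hide random patches of the input, train the network to reconstruct the original pixels, and score only the hidden regions. The sketch below assumes a generic image-to-image model and omits SparK's sparse-convolution encoder and hierarchical decoder; names and hyperparameters are illustrative.

```python
import torch

def masked_reconstruction_loss(model, images, patch=16, mask_ratio=0.6):
    """Masked-image-modeling objective: mask random patches, reconstruct
    the image, and compute the error only on the hidden pixels.

    `model` is any image-to-image network; H and W are assumed to be
    divisible by `patch`. This is a generic sketch, not SparK itself.
    """
    B, C, H, W = images.shape
    gh, gw = H // patch, W // patch
    # One keep/drop decision per patch, broadcast over channels.
    keep = (torch.rand(B, 1, gh, gw, device=images.device) > mask_ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    recon = model(images * mask)               # forward pass on the masked input
    hidden = (1.0 - mask).expand_as(recon)     # 1 where pixels were masked out
    # Mean squared error over the masked pixels only.
    return ((recon - images) ** 2 * hidden).sum() / hidden.sum().clamp(min=1.0)
```

Restricting the loss to masked pixels is what forces the encoder to infer hidden content from visible context rather than simply copying its input.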

Purpose: Semantic segmentation is one of the most significant tasks in medical image computing, where deep neural networks have shown great success. Unfortunately, supervised approaches are very data-intensive, and obtaining reliable annotations is time-consuming and expensive. Sparse annotation schemes, such as bounding boxes, have shown some success in reducing annotation time.
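One common pattern for learning segmentation from bounding boxes, sketched below under the assumption of a binary foreground/background task, is to treat box interiors as noisy foreground labels and everything outside the boxes as background. This is a generic illustration of box-based weak supervision, not a reproduction of the paper's specific method.

```python
import torch
import torch.nn.functional as F

def box_supervised_loss(logits, boxes):
    """Weak segmentation loss derived from bounding-box annotations.

    logits: (B, 1, H, W) raw foreground scores from any segmentation net.
    boxes:  per-image lists of (x0, y0, x1, y1) pixel coordinates; box
            interiors are treated as noisy foreground, all remaining
            pixels as background. Illustrative only.
    """
    B, _, H, W = logits.shape
    target = torch.zeros(B, 1, H, W, device=logits.device)
    for b, image_boxes in enumerate(boxes):
        for x0, y0, x1, y1 in image_boxes:
            target[b, 0, y0:y1, x0:x1] = 1.0  # fill each box as foreground
    return F.binary_cross_entropy_with_logits(logits, target)
```

The box-derived targets are deliberately noisy (a box contains background pixels too), which is why such approaches reduce annotation time at some cost in label precision.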
