Publications by authors named "Fengbei Liu"

3D medical image segmentation methods have been successful, but their dependence on large amounts of voxel-level annotated data is a disadvantage that needs to be addressed, given the high cost of obtaining such annotations. Semi-supervised learning (SSL) addresses this issue by training models with a large unlabelled dataset and a small labelled one. The most successful SSL approaches are based on consistency learning, which minimises the distance between model responses obtained from perturbed views of the unlabelled data.
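The consistency idea can be illustrated with a short, generic sketch: predictions from two perturbed views of the same unlabelled volume are pushed to agree, while a standard supervised loss is applied to the labelled volumes. The tiny 3D network, Gaussian-noise perturbation, and loss weight below are illustrative placeholders, not the method proposed in the paper.

```python
# Hedged sketch: generic consistency learning for semi-supervised 3D segmentation.
# The toy network, noise perturbation, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy 3D segmentation network (placeholder for a real UNet-style model)."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)  # logits: (B, C, D, H, W)

def perturb(x, noise_std=0.1):
    """One simple perturbation: additive Gaussian noise (stand-in for stronger augmentations)."""
    return x + noise_std * torch.randn_like(x)

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batches: a small labelled set and a larger unlabelled set.
x_lab = torch.randn(2, 1, 16, 16, 16)
y_lab = torch.randint(0, 2, (2, 16, 16, 16))
x_unl = torch.randn(4, 1, 16, 16, 16)

for step in range(10):
    # Supervised loss on the labelled subset.
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    # Consistency loss: predictions from two perturbed views of the same
    # unlabelled volumes are pushed to agree (here via MSE on softmax outputs).
    p1 = F.softmax(model(perturb(x_unl)), dim=1)
    p2 = F.softmax(model(perturb(x_unl)), dim=1)
    cons_loss = F.mse_loss(p1, p2)

    loss = sup_loss + 0.1 * cons_loss  # 0.1 is an arbitrary illustrative weight
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the perturbations are usually stronger spatial and intensity augmentations, and the consistency weight is typically ramped up over training rather than fixed.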


Methods to detect malignant lesions from screening mammograms are usually trained with fully annotated datasets, where images are labelled with the localisation and classification of cancerous lesions. However, real-world screening mammogram datasets commonly have a subset that is fully annotated and another subset that is weakly annotated with just the global classification (i.e., without lesion localisation).
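One generic way to exploit such mixed supervision is sketched below, purely as an illustration: a lesion map is supervised pixel-wise on the fully annotated subset and only at the image level on the weakly annotated subset. The heatmap model, max-pooling aggregation, and equal loss weighting are assumptions for this sketch, not the detection method studied in the paper.

```python
# Hedged sketch: one generic way to train with a mix of fully and weakly
# annotated mammograms. The heatmap model, max-pooling aggregation, and loss
# weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LesionHeatmapNet(nn.Module):
    """Toy backbone producing a per-pixel lesion logit map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)  # (B, 1, H, W) lesion logits

model = LesionHeatmapNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fully annotated subset: image + binary lesion mask (derived from the annotations).
x_full = torch.randn(2, 1, 64, 64)
mask_full = (torch.rand(2, 1, 64, 64) > 0.95).float()

# Weakly annotated subset: image + global cancer/no-cancer label only.
x_weak = torch.randn(4, 1, 64, 64)
y_weak = torch.randint(0, 2, (4,)).float()

for step in range(10):
    # Pixel-level supervision where localisation is available.
    loc_loss = F.binary_cross_entropy_with_logits(model(x_full), mask_full)

    # Image-level supervision where only the global label is available:
    # aggregate the heatmap (here by global max pooling) into one score per image.
    heat = model(x_weak)
    img_logit = heat.amax(dim=(2, 3)).squeeze(1)
    cls_loss = F.binary_cross_entropy_with_logits(img_logit, y_weak)

    loss = loc_loss + cls_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```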

Article Synopsis
  • Unsupervised anomaly detection (UAD) methods are trained with only normal images yet must distinguish normal from abnormal images at test time, making them valuable for medical image analysis when only normal images are available for training.
  • Relying solely on normal images, however, can produce representations that are too weak to detect the wide variety of unseen abnormalities.
  • The paper introduces a new self-supervised pre-training method, PMSACL, which improves UAD performance by leveraging multiple pseudo classes of abnormal images to form dense clusters in the feature space, yielding better accuracy on several medical imaging benchmarks (an illustrative sketch of the pseudo-class idea follows this list).
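The pseudo-class idea can be sketched generically: synthetic "abnormal" variants of each normal training image are generated by a few hand-crafted corruptions, and the encoder is trained to separate them, so each pseudo class forms its own cluster in feature space. The patch corruptions, the plain cross-entropy objective, and the toy encoder below are illustrative assumptions, not the PMSACL algorithm itself.

```python
# Hedged sketch: pre-training with multiple pseudo classes of synthetic "abnormal"
# images, so each class forms its own cluster. Not the PMSACL algorithm itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pseudo_abnormal(x, kind):
    """Create a synthetic anomaly of a given type (kind 1 or 2) from normal images."""
    x = x.clone()
    if kind == 1:    # blank out a square patch
        x[:, :, 8:24, 8:24] = 0.0
    elif kind == 2:  # paste one patch over another region (CutPaste-like)
        x[:, :, :16, :16] = x[:, :, 16:32, 16:32]
    return x

class Encoder(nn.Module):
    """Toy encoder with a pseudo-class head (3 classes: normal + 2 pseudo-abnormal)."""
    def __init__(self, n_pseudo_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(8, n_pseudo_classes)
    def forward(self, x):
        return self.head(self.features(x))

enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
x_normal = torch.randn(4, 1, 32, 32)  # training uses normal images only

for step in range(10):
    # Build a batch of (image, pseudo-class) pairs from normal data alone.
    xs = torch.cat([x_normal, pseudo_abnormal(x_normal, 1), pseudo_abnormal(x_normal, 2)])
    ys = torch.cat([torch.full((4,), k, dtype=torch.long) for k in range(3)])
    loss = F.cross_entropy(enc(xs), ys)  # pulls each pseudo class into its own cluster
    opt.zero_grad()
    loss.backward()
    opt.step()
```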

The deployment of automated deep-learning classifiers in clinical practice has the potential to streamline the diagnostic process and improve diagnostic accuracy, but the acceptance of such classifiers depends on both their accuracy and their interpretability. In general, accurate deep-learning classifiers provide little model interpretability, while interpretable models do not achieve competitive classification accuracy. In this paper, we introduce a new deep-learning diagnosis framework, called InterNRL, that is designed to be both highly accurate and interpretable.
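As one common "interpretable by design" pattern, and purely as an illustration (the excerpt does not specify InterNRL's actual architecture, so this is a generic assumption, not that framework), the sketch below classifies by similarity to learned prototypes, so each prediction can be explained by reporting which prototypes it matched.

```python
# Hedged sketch: a prototype-based classifier head, shown only as one common
# "interpretable by design" pattern; it is not the InterNRL framework.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Classifies by similarity to learned prototypes, so each decision can be
    explained by which prototypes (and image regions) it matched."""
    def __init__(self, feat_dim=8, n_prototypes_per_class=2, n_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU())
        n_protos = n_classes * n_prototypes_per_class
        self.prototypes = nn.Parameter(torch.randn(n_protos, feat_dim))
        self.classifier = nn.Linear(n_protos, n_classes, bias=False)

    def forward(self, x):
        f = self.backbone(x)                    # (B, C, H, W) feature map
        f = f.flatten(2).transpose(1, 2)        # (B, H*W, C) patch features
        # Cosine similarity of every patch to every prototype, max over patches:
        f_n = F.normalize(f, dim=-1)
        p_n = F.normalize(self.prototypes, dim=-1)
        sim = f_n @ p_n.t()                     # (B, H*W, n_protos)
        proto_scores = sim.amax(dim=1)          # (B, n_protos): evidence per prototype
        return self.classifier(proto_scores), proto_scores

model = PrototypeClassifier()
logits, evidence = model(torch.randn(2, 1, 32, 32))
# `evidence` shows how strongly each prototype fired, which is what makes the
# prediction inspectable; `logits` feed a standard classification loss.
```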
