Unsupervised domain adaptation (UDA) has shown impressive performance by improving model generalizability to tackle the domain-shift problem in cross-modality medical segmentation. However, most existing UDA approaches depend on high-quality image translation with diversity constraints to explicitly augment data diversity, which makes it hard to ensure semantic consistency and to capture domain-invariant representations. In this paper, free of image translation and diversity constraints, we propose a novel Style Mixup Enhanced Disentanglement Learning (SMEDL) framework for UDA medical image segmentation that further improves domain generalization and enhances domain-invariant learning. Firstly, our method adopts disentangled style mixup to implicitly generate style-mixed domains with diverse styles in the feature space through a convex combination of disentangled style factors, which effectively improves model generalization. Meanwhile, we introduce pixel-wise consistency regularization to ensure the effectiveness of the style-mixed domains and to provide domain-consistency guidance. Secondly, we introduce dual-level domain-invariant learning, including intra-domain contrastive learning and inter-domain adversarial learning, to mine the underlying domain-invariant representation under both intra- and inter-domain variations. We conducted comprehensive experiments on two public cardiac datasets and one brain dataset. The results demonstrate that our method achieves superior performance compared to state-of-the-art methods for UDA medical image segmentation.
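The abstract's core mechanism — generating style-mixed domains in feature space via a convex combination of disentangled style factors — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes, as is common in feature-space style mixing, that the disentangled style factors correspond to channel-wise feature statistics (mean and standard deviation), and the function name and Beta-distributed mixing weight are illustrative choices.

```python
import numpy as np

def style_mixup(feat_a, feat_b, alpha=0.1, eps=1e-6):
    """Mix the styles of two feature maps (shape: C x H x W) while
    keeping feat_a's content.

    Hypothetical sketch: the "disentangled style factors" are
    approximated by per-channel mean/std statistics.
    """
    lam = np.random.beta(alpha, alpha)  # convex mixing weight in [0, 1]
    # Channel-wise style statistics of each feature map
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sig_a = feat_a.std(axis=(1, 2), keepdims=True) + eps
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sig_b = feat_b.std(axis=(1, 2), keepdims=True) + eps
    # Convex combination of the style factors
    mu_mix = lam * mu_a + (1 - lam) * mu_b
    sig_mix = lam * sig_a + (1 - lam) * sig_b
    # Re-stylize feat_a's normalized content with the mixed style
    return sig_mix * (feat_a - mu_a) / sig_a + mu_mix
```

Because the operation only perturbs first- and second-order channel statistics, the spatial (semantic) content of `feat_a` is preserved, which is why such feature-space mixing avoids the semantic-consistency issues of image-level translation.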
DOI: http://dx.doi.org/10.1016/j.media.2024.103440
Med Image Anal
December 2024
Department of Electrical Engineering, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
IEEE Trans Med Imaging
August 2024
Deep learning models for medical image analysis readily suffer from distribution shifts caused by dataset artifact bias, camera variations, differences between imaging stations, and so on, leading to unreliable diagnoses in real-world clinical settings. Domain generalization (DG) methods, which train models on multiple domains so that they perform well on unseen domains, offer a promising direction for solving this problem.
IEEE Trans Image Process
August 2022
Fine-grained visual classification can be addressed by deep representation learning under the supervision of manually pre-defined targets (e.g., one-hot or Hadamard codes).
Annu Int Conf IEEE Eng Med Biol Soc
November 2021
Motion recognition based on the surface electromyogram (sEMG) recorded from the forearm is attracting attention for its applicability, because it integrates easily with wearable devices and has a high signal-to-noise ratio. Inter-subject variability and inadequate data availability are common problems when building such classifiers. Transfer learning (TL) techniques can reduce inter-subject variability; however, when the amount of data recorded from each source subject is small, a TL-combined classifier is prone to overfitting.
Sensors (Basel)
May 2020
Institute of Communications Engineering, Ulm University, Albert-Einstein-Allee 43, 89081 Ulm, Germany.
Human-machine addressee detection (H-M AD) is a modern paralinguistic and dialogue challenge that arises in multiparty conversations between several people and a spoken dialogue system (SDS), since the users may also talk to each other, and even to themselves, while interacting with the system. The SDS must therefore determine whether it is being addressed. All existing studies on acoustic H-M AD were conducted on corpora designed in such a way that a human addressee and a machine played different dialogue roles.