Domain generalization (DG) aims to learn a model on one or more observed source domains that can generalize to unseen target test domains. Previous approaches have focused on extracting domain-invariant information from multiple source domains, while domain-specific information, although closely tied to the semantics of individual domains, is typically discarded because it does not transfer directly to the target domain. In this article, we propose a novel DG method called continuous disentangled joint space learning (CJSL), which leverages both domain-invariant and domain-specific information for more effective DG. The key idea behind CJSL is to formulate and learn a continuous joint space (CJS) for domain-specific representations from the source domains through iterative feature disentanglement. The learned CJS can then be used to simulate domain-specific representations for test samples, drawn from a mixture of multiple domains via Monte Carlo sampling at inference time. Unlike existing approaches, which exploit only domain-invariant feature vectors or aim to learn a universal domain-specific feature extractor, we simulate domain-specific representations by sampling latent vectors in the learned CJS for each test sample, fully exploiting the power of multiple domain-specific classifiers for robust prediction. Empirical results demonstrate that CJSL outperforms 19 state-of-the-art (SOTA) methods on seven benchmarks, indicating the effectiveness of the proposed method.
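To make the inference mechanism concrete, below is a minimal PyTorch sketch of how Monte Carlo sampling from a learned joint space could drive multiple domain-specific classifiers. This is an illustration under stated assumptions, not the authors' implementation: the CJS is approximated here as a set of per-source-domain Gaussian components, and all names (CJSInference, invariant_encoder, mu, log_sigma) are hypothetical.

```python
# Hypothetical sketch of CJS-style inference (names are illustrative, not the paper's code).
# A shared, domain-invariant encoder is paired with K domain-specific classifiers.
# The "continuous joint space" is approximated as per-domain Gaussian components
# over domain-specific latents; at test time we Monte Carlo sample latents, feed
# each through every domain-specific classifier, and average the predictions.
import torch
import torch.nn as nn


class CJSInference(nn.Module):
    def __init__(self, feat_dim=128, latent_dim=32, num_domains=3, num_classes=10):
        super().__init__()
        # Shared, domain-invariant feature extractor (placeholder MLP).
        self.invariant_encoder = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # Per-source-domain Gaussian components of the learned joint space.
        # In the paper these would be fit during training via iterative
        # feature disentanglement; here they are plain learnable parameters.
        self.mu = nn.Parameter(torch.zeros(num_domains, latent_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_domains, latent_dim))
        # One classifier per source domain, consuming [invariant ; specific] features.
        self.classifiers = nn.ModuleList(
            nn.Linear(feat_dim + latent_dim, num_classes) for _ in range(num_domains)
        )

    @torch.no_grad()
    def forward(self, x, num_samples=8):
        h = self.invariant_encoder(x)                 # (B, feat_dim)
        probs = 0.0
        for _ in range(num_samples):
            # Sample one latent from each domain's component of the joint space.
            eps = torch.randn_like(self.mu)           # (K, latent_dim)
            z = self.mu + self.log_sigma.exp() * eps  # (K, latent_dim)
            for k, clf in enumerate(self.classifiers):
                zk = z[k].expand(h.size(0), -1)       # broadcast latent over batch
                logits = clf(torch.cat([h, zk], dim=1))
                probs = probs + logits.softmax(dim=1)
        # Average over Monte Carlo samples and domain-specific classifiers.
        return probs / (num_samples * len(self.classifiers))


# Usage: averaged class probabilities for a batch of test features.
model = CJSInference()
x = torch.randn(4, 128)
pred = model(x).argmax(dim=1)
```

Averaging softmax probabilities across both the sampled latents and the K classifiers is one simple way to realize "robust prediction from multiple domain-specific classifiers"; the paper's actual aggregation rule may differ.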

Source
http://dx.doi.org/10.1109/TNNLS.2024.3454689

Publication Analysis

Top Keywords

joint space (12)
source domains (12)
domain-specific representations (12)
continuous disentangled (8)
disentangled joint (8)
space learning (8)
domain generalization (8)
learned cjs (8)
simulate domain-specific (8)
domain-specific (7)
