Gradual Domain Adaptation via Normalizing Flows.

Neural Computation

Department of Advanced Data Science, Institute of Statistical Mathematics, Tachikawa, Tokyo 190-8562, Japan

Published: January 2025

Standard domain adaptation methods do not work well when a large gap exists between the source and target domains. Gradual domain adaptation is one approach to this problem: it leverages intermediate domains that shift gradually from the source domain to the target domain. Previous work assumes that the number of intermediate domains is large and the distance between adjacent domains is small, so that gradual self-training with unlabeled data sets is applicable. In practice, however, gradual self-training fails when the number of intermediate domains is limited and the distance between adjacent domains is large. We propose using normalizing flows to address this problem while remaining within the framework of unsupervised domain adaptation. The proposed method learns a transformation from the distribution of the target domains to a Gaussian mixture distribution via the source domain. We evaluate the proposed method in experiments on real-world data sets and confirm that it mitigates the problem described above and improves classification performance.
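
The core idea can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' implementation: it trains a RealNVP-style normalizing flow by maximum likelihood so that unlabeled samples map to a Gaussian mixture in latent space, with one mixture component per class. Names such as AffineCoupling and FlowToGMM, the fixed component means, and the layer sizes are our own assumptions.

```python
# Illustrative sketch (assumptions noted above), not the paper's code.
import torch
import torch.nn as nn
import torch.distributions as D

class AffineCoupling(nn.Module):
    """One affine coupling layer (RealNVP-style): transforms half the
    features conditioned on the other half. Assumes an even feature dim."""
    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - half)),
        )

    def forward(self, x):
        # Split features; alternate which half conditions the other.
        x1, x2 = x.chunk(2, dim=-1)
        if self.flip:
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                      # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        z = torch.cat([z2, x1] if self.flip else [x1, z2], dim=-1)
        return z, s.sum(-1)                    # log|det Jacobian|

class FlowToGMM(nn.Module):
    """Stack of coupling layers mapping data to a Gaussian-mixture latent
    (means fixed at random here; they could instead be learned or tied
    to class labels on the source domain)."""
    def __init__(self, dim, n_classes, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers))
        means = torch.randn(n_classes, dim) * 3.0   # well-separated modes
        mix = D.Categorical(torch.ones(n_classes))
        comp = D.Independent(D.Normal(means, torch.ones(n_classes, dim)), 1)
        self.base = D.MixtureSameFamily(mix, comp)

    def log_prob(self, x):
        # Change-of-variables: base log-density plus accumulated log-dets.
        logdet = 0.0
        for layer in self.layers:
            x, ld = layer(x)
            logdet = logdet + ld
        return self.base.log_prob(x) + logdet

# Maximum-likelihood fitting on unlabeled samples from one domain:
flow = FlowToGMM(dim=2, n_classes=3)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
x = torch.randn(256, 2)                        # stand-in for domain data
opt.zero_grad()
loss = -flow.log_prob(x).mean()
loss.backward()
opt.step()
```

Per the abstract, the full method chains such transformations from the target domains through the source domain to the Gaussian mixture; the sketch above covers only the single-domain density-fitting step.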

Source
DOI: http://dx.doi.org/10.1162/neco_a_01734
