Unsupervised open-set domain adaptation (UODA) is a realistic problem in which unlabeled target data contain unknown classes. Prior methods rely on the coexistence of source and target domain data to perform domain alignment, which greatly limits their applicability when access to source data is restricted due to privacy concerns. In this paper we address the challenging hypothesis transfer setting for UODA, where data from the source domain are no longer available during adaptation to the target domain. Specifically, we propose to use pseudo-labels and a novel consistency regularization on target data, since conventional formulations fail in this open-set setting. First, our method discovers confident predictions on the target domain and performs classification with pseudo-labels. We then enforce the model to output consistent and definite predictions on semantically similar transformed inputs, thereby discovering all latent class semantics. As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or unknown classes. We theoretically prove that, under perfect semantic transformation, the proposed consistency objective can recover the information of the true labels in the predictions. Experimental results show that our model outperforms state-of-the-art methods on UODA benchmarks.
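The pseudo-labeling and consistency ideas summarized above can be sketched in a few lines of PyTorch. The code below is only an illustrative sketch under our own assumptions, not the authors' implementation: the confidence threshold, the weak/strong transformation pair, the equal loss weighting, and the entropy term used to make predictions "definite" are all assumptions on our part.

```python
import torch
import torch.nn.functional as F

def adaptation_losses(model, x_weak, x_strong, conf_threshold=0.95):
    """Pseudo-label and consistency losses for one unlabeled target batch.

    x_weak / x_strong are two semantically similar transformations (e.g. weak
    and strong augmentations) of the same target images. conf_threshold and
    the loss weighting are illustrative choices, not values from the paper.
    """
    logits_weak = model(x_weak)      # predictions on the lightly transformed view
    logits_strong = model(x_strong)  # predictions on the heavily transformed view

    probs_weak = F.softmax(logits_weak, dim=1)
    max_prob, pseudo_labels = probs_weak.max(dim=1)

    # 1) Self-training: cross-entropy against pseudo-labels, kept only for
    #    confident target predictions.
    mask = (max_prob >= conf_threshold).float()
    ce = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    pseudo_label_loss = (mask * ce).mean()

    # 2) Consistency: predictions on the two views should agree ...
    probs_strong = F.softmax(logits_strong, dim=1)
    consistency_loss = F.mse_loss(probs_strong, probs_weak.detach())

    # 3) ... and be definite (low entropy), pushing each sample toward one class.
    entropy = -(probs_strong * torch.log(probs_strong + 1e-8)).sum(dim=1).mean()

    return pseudo_label_loss + consistency_loss + entropy
```

In this reading, gating the cross-entropy on confident predictions keeps noisy pseudo-labels out of the self-training signal, while the consistency and entropy terms encourage each pair of transformed inputs to fall decisively into one class, whether it matches a source class or an unknown one.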

Source: http://dx.doi.org/10.1109/TIP.2021.3093393
