On the benefits of representation regularization in invariance based domain generalization

Mach Learn

Canada CIFAR AI Chair, Mila, Université Laval, Quebec City, G1V 0A6 Canada.

Published: January 2022

A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new environments that are related to, but unobserved in, the training data. Domain generalization aims to close this prediction gap between observed and unseen environments. Previous approaches have commonly incorporated invariant representation learning to achieve good empirical performance. In this paper, we reveal that merely learning an invariant representation is vulnerable on related but unseen environments. To address this, we derive a novel theoretical analysis that controls the test error on the unseen environment in representation learning, which highlights the importance of controlling the smoothness of the representation. In practice, our analysis further inspires an efficient regularization method for improving robustness in domain generalization. The proposed regularization is orthogonal to, and can be straightforwardly adopted by, existing domain generalization algorithms that ensure invariant representation learning. Empirical results show that our algorithm outperforms the base versions across various datasets and invariance criteria.
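To make the idea concrete, below is a minimal sketch of how a smoothness regularizer on the representation could be combined with an invariance-based objective. The invariance penalty here is the IRM v1 penalty (Arjovsky et al., 2019), and the input-gradient penalty on the features is an illustrative proxy for "controlling the smoothness of the representation"; all names, architectures, and coefficients are assumptions for illustration, not the paper's exact method.

```python
# Hypothetical sketch: invariance-based DG loss plus a representation-
# smoothness penalty. The Jacobian-norm proxy below is an illustrative
# stand-in for the paper's regularizer, not its exact formulation.
import torch
import torch.nn as nn

featurizer = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 32))
classifier = nn.Linear(32, 2)
criterion = nn.CrossEntropyLoss()

def irm_penalty(logits, y):
    # IRM v1 penalty: squared gradient of the per-environment risk with
    # respect to a fixed scalar "dummy" classifier multiplier.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = criterion(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def smoothness_penalty(x):
    # Penalize the input gradient of the squared feature norm: a cheap
    # proxy for the Jacobian norm of the featurizer, i.e. for how
    # smoothly the representation varies with its input.
    x = x.clone().requires_grad_(True)
    z = featurizer(x)
    grad = torch.autograd.grad(z.pow(2).sum(), [x], create_graph=True)[0]
    return grad.pow(2).sum(dim=1).mean()

def total_loss(envs, lam_inv=1.0, lam_smooth=0.1):
    # envs: list of (x, y) batches, one per training environment.
    losses = []
    for x, y in envs:
        logits = classifier(featurizer(x))
        losses.append(criterion(logits, y)
                      + lam_inv * irm_penalty(logits, y)
                      + lam_smooth * smoothness_penalty(x))
    return torch.stack(losses).mean()

# Usage on synthetic data with two training environments.
envs = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(2)]
loss = total_loss(envs)
loss.backward()
```

Because the smoothness term only touches the featurizer, it can be added to any invariance criterion (IRM, CORAL, and similar) without changing that criterion's own penalty, which is what "orthogonal" means in the abstract above.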


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9012768
DOI: http://dx.doi.org/10.1007/s10994-021-06080-w

Publication Analysis

Top Keywords
domain generalization (16), invariant representation (12), learning invariant (8), representation learning (8), learning (5), representation (5), benefits representation (4), representation regularization (4), regularization invariance (4), invariance based (4)
