Methods that utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs. However, these methods struggle to detect OOD inputs that share nuisance values (e.g., background) with in-distribution inputs. The detection of shared-nuisance out-of-distribution (SN-OOD) inputs is particularly relevant in real-world applications, as anomalies and in-distribution inputs tend to be captured in the same settings during deployment. In this work, we provide a possible explanation for SN-OOD detection failures and propose nuisance-aware OOD detection to address them. Nuisance-aware OOD detection substitutes a classifier trained via empirical risk minimization (ERM) and cross-entropy loss with one that (1) is trained under a distribution where the nuisance-label relationship is broken and (2) yields representations that are independent of the nuisance under this distribution, both marginally and conditioned on the label. We can train a classifier to achieve these objectives using Nuisance-Randomized Distillation (NuRD), an algorithm developed for OOD generalization under spurious correlations. Output- and feature-based nuisance-aware OOD detection perform substantially better than their original counterparts, succeeding even when detection based on domain generalization algorithms fails to improve performance.
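To make the detection side concrete, below is a minimal sketch of the two score families the abstract refers to, assuming a PyTorch classifier whose penultimate-layer features are available. The function names and the specific score choices (maximum softmax probability for the output-based score, class-conditional Mahalanobis distance for the feature-based score) are common illustrative stand-ins, not the paper's exact implementation; in the nuisance-aware variant, the backbone producing `logits` and `feats` would be NuRD-trained rather than ERM-trained.

```python
# Hypothetical sketch of output- and feature-based OOD scores.
# Assumes `logits` and `feats` come from a trained classifier's output
# and penultimate layer; in nuisance-aware OOD detection that classifier
# would be NuRD-trained. Higher scores mean "more in-distribution".
import torch
import torch.nn.functional as F


def output_based_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability over classes, one score per input."""
    return F.softmax(logits, dim=-1).max(dim=-1).values


def fit_feature_stats(train_feats: torch.Tensor,
                      train_labels: torch.Tensor,
                      num_classes: int):
    """Per-class feature means and a shared precision matrix, fit on
    in-distribution training data (Mahalanobis-style detector)."""
    means = torch.stack([train_feats[train_labels == c].mean(dim=0)
                         for c in range(num_classes)])          # (C, D)
    centered = train_feats - means[train_labels]                # (N, D)
    cov = centered.T @ centered / train_feats.shape[0]          # (D, D)
    precision = torch.linalg.pinv(cov)
    return means, precision


def feature_based_score(feats: torch.Tensor,
                        means: torch.Tensor,
                        precision: torch.Tensor) -> torch.Tensor:
    """Negative minimum class-conditional Mahalanobis distance."""
    diffs = feats.unsqueeze(1) - means.unsqueeze(0)             # (N, C, D)
    d2 = torch.einsum('ncd,de,nce->nc', diffs, precision, diffs)
    return -d2.min(dim=1).values
```

Either score can then be thresholded on held-out in-distribution data; the abstract's claim is that swapping the ERM-trained backbone for a NuRD-trained one improves both score families on SN-OOD inputs.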
Source links:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10923583
- DOI: http://dx.doi.org/10.1609/aaai.v37i12.26785