Autoencoders (AEs) have recently been widely employed to approach the novelty detection problem. Trained only on normal data, the AE is expected to reconstruct normal samples accurately while failing to regenerate anomalous ones; based on this assumption, the reconstruction error can serve as a novelty score. However, this assumption does not always hold: such an AE can often reconstruct anomalous data almost perfectly because it captures low-level, generic features of the input rather than semantically meaningful ones. We propose a novel training algorithm for the AE that facilitates learning more semantically meaningful features and thereby addresses this problem. For this purpose, we exploit the fact that adversarial robustness promotes the learning of significant features, and we force the AE to learn such features by making its bottleneck layer more stable against adversarial perturbations. The idea is general and can be applied to other autoencoder-based approaches as well. We show that, despite using a much simpler architecture than prior methods, the proposed AE outperforms or is competitive with the state of the art on four benchmark datasets and two medical datasets.
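As a rough illustration of the core idea, the sketch below (PyTorch) trains an autoencoder with a standard reconstruction loss plus a penalty that keeps the bottleneck representation stable under a small input perturbation chosen to move it the most. The architecture, perturbation budget, PGD step count, and weighting factor gamma are illustrative assumptions, not the paper's exact formulation or settings.

```python
# Sketch: adversarially robust AE training for novelty detection.
# All hyperparameters and the network layout are assumptions for illustration.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def latent_adv_perturbation(model, x, eps=0.05, steps=5, step_size=0.02):
    """Find a small input perturbation that maximally moves the bottleneck
    representation (a PGD-style inner loop on the latent-space distance)."""
    delta = torch.zeros_like(x, requires_grad=True)
    z_clean = model.encoder(x).detach()
    for _ in range(steps):
        z_adv = model.encoder(x + delta)
        loss = ((z_adv - z_clean) ** 2).sum()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

def train_step(model, opt, x, gamma=0.1):
    """One update: reconstruction loss on clean inputs plus a penalty that
    keeps the bottleneck stable under the crafted perturbation."""
    delta = latent_adv_perturbation(model, x)
    recon, z = model(x)
    z_adv = model.encoder(x + delta)
    loss = nn.functional.mse_loss(recon, x) + gamma * ((z_adv - z) ** 2).mean()
    opt.zero_grad()  # also clears stray gradients accumulated while crafting delta
    loss.backward()
    opt.step()
    return loss.item()
```

At test time, following the assumption stated above, the reconstruction error of a sample would be used as its novelty score: normal inputs should be reconstructed well, anomalous ones poorly.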

Source
http://dx.doi.org/10.1016/j.neunet.2021.09.014

Publication Analysis

Top Keywords

novelty detection: 12
normal data: 8
anomalous data: 8
arae adversarially: 4
adversarially robust: 4
robust training: 4
training autoencoders: 4
autoencoders improves: 4
improves novelty: 4
detection autoencoders: 4
