Whilst adversarial training has proven to be one of the most effective defenses against adversarial attacks on deep neural networks, it suffers from overfitting to the adversarial training data and thus may not guarantee robust generalization. This may stem from the fact that conventional adversarial training methods usually generate adversarial perturbations in a supervised way, so the resulting adversarial examples are highly biased toward the decision boundary, leading to an inhomogeneous data distribution. To mitigate this limitation, we propose to generate adversarial examples from a perturbation-diversity perspective.
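For context, the "supervised" perturbation generation the abstract criticizes is typically PGD-style inner maximization of the classification loss. Below is a minimal illustrative sketch of that conventional scheme (not the paper's proposed method); the function name and hyperparameter values are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Conventional supervised PGD attack (illustrative, not the paper's method).

    Ascends the cross-entropy loss w.r.t. the true label y, which is
    exactly what biases the resulting adversarial examples toward the
    decision boundary, as the abstract notes.
    """
    # Random start inside the L-infinity ball of radius eps.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        # Supervised objective: maximize loss against the true label.
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Signed gradient ascent step, then project back into the ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    # Keep the perturbed image in the valid pixel range.
    return (x + delta).clamp(0, 1).detach()
```

A diversity-driven alternative, as the abstract proposes, would instead spread perturbations across directions rather than concentrating them along this single loss-gradient direction.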