Deep neural networks (DNNs) are susceptible to adversarial examples, which are crafted by deliberately adding human-imperceptible perturbations to original images. To probe the vulnerability of DNN models, transfer-based black-box attacks are attracting increasing attention from researchers owing to their high practicality. Transfer-based approaches can easily attack models in the black-box setting with the resulting adversarial examples, although the success rates are often unsatisfactory.
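To make the crafting step concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a standard way to add a small, bounded perturbation that increases a model's loss. The function and variable names are illustrative, not from the source; `eps` bounds the L-infinity norm of the perturbation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=8 / 255):
    """Perturb `images` by one signed-gradient step of size `eps` (FGSM)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp
    # back to the valid pixel range so the image stays well-formed.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```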
Such adversarial examples, crafted by imposing mild perturbations on clean inputs, have an intriguing property: they often remain effective across different DNNs, i.e., they transfer. Transfer-based attacks against DNNs have therefore become a growing concern.
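The sketch below illustrates how transferability is typically measured: examples are crafted against a white-box surrogate model and then fed to a separate, unqueried target model. It reuses the hypothetical `fgsm_attack` helper above; `surrogate`, `target`, and `loader` are placeholder names, not identifiers from the source.

```python
import torch

def transfer_success_rate(surrogate, target, loader, eps=8 / 255):
    """Fraction of adversarial examples crafted on `surrogate`
    that the black-box `target` misclassifies."""
    fooled, total = 0, 0
    for images, labels in loader:
        adv = fgsm_attack(surrogate, images, labels, eps)
        with torch.no_grad():
            preds = target(adv).argmax(dim=1)
        # Simplification: counts inputs the target already got wrong
        # as successes; stricter evaluations filter those out first.
        fooled += (preds != labels).sum().item()
        total += labels.numel()
    return fooled / total
```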