Deep neural networks (DNNs) play key roles in various artificial intelligence applications such as image classification and object recognition. However, a growing number of studies have shown that adversarial examples exist for DNNs: inputs that are almost imperceptibly different from the original samples yet greatly change the network's output. Recently, many white-box attack algorithms have been proposed, most of which concentrate on how to make the best use of the gradients computed in each iteration to improve adversarial performance. In this article, we focus on the properties of the widely used activation function, the rectified linear unit (ReLU), and find that two phenomena (wrong blocking and over-transmission) misguide the calculation of gradients through ReLU during backpropagation. Both issues enlarge the gap between the change in the loss function predicted from the gradients and the actual change, misguiding the optimization direction and resulting in larger perturbations. We therefore propose a universal gradient-correction method for generating adversarial examples, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms such as the fast gradient sign method (FGSM), iterative FGSM (I-FGSM), momentum I-FGSM (MI-FGSM), and variance-tuning MI-FGSM (VMI-FGSM). Through backpropagation, our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a subset of them to update the misguided gradients. Comprehensive experimental results on ImageNet and CIFAR-10 demonstrate that ADV-ReLU can be easily integrated into many state-of-the-art gradient-based white-box attack algorithms, as well as transferred to black-box attacks, to further decrease perturbations measured in the ℓ-norm.
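The core loop of such gradient-based white-box attacks is short. Below is a minimal PyTorch-style sketch of a single FGSM step with a placeholder gradient-correction hook standing in for a scheme like ADV-ReLU; the `correct_gradient` function and its magnitude-based scoring are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal FGSM-style sketch with a placeholder gradient-correction hook.
# The hook stands in for a scheme such as ADV-ReLU, which rescores and
# updates gradient entries misguided by ReLU during backpropagation;
# the magnitude-based selection below is an illustrative assumption only.
import torch
import torch.nn.functional as F

def correct_gradient(grad, keep_ratio=0.9):
    # Hypothetical correction: score gradient entries by magnitude and
    # zero out the lowest-scoring fraction, keeping the rest unchanged.
    scores = grad.abs().flatten(1)                  # (batch, num_features)
    k = max(1, int(keep_ratio * scores.shape[1]))
    kth = scores.shape[1] - k + 1                   # k-th largest value
    thresh = scores.kthvalue(kth, dim=1, keepdim=True).values
    mask = (scores >= thresh).to(grad.dtype).view_as(grad)
    return grad * mask

def fgsm_attack(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    grad = correct_gradient(grad)        # gradient-correction step
    x_adv = x + eps * grad.sign()        # single FGSM step
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterative variants (I-FGSM, MI-FGSM, VMI-FGSM) repeat such a step with smaller step sizes and accumulate momentum or variance statistics between iterations.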
DOI: 10.1109/TNNLS.2023.3315414
Entropy (Basel)
December 2024
School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China.
Differential Computation Analysis (DCA) leverages memory traces to extract secret keys, bypassing countermeasures employed in white-box designs, such as encodings. Although researchers have made great efforts to strengthen security against DCA, most solutions considerably decrease algorithmic efficiency. In our approach, the Feistel cipher SM4 is implemented as a series of table-lookup operations, and the input and output of each table are protected by randomly generated affine transformations and nonlinear encodings.
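As a rough illustration of the general idea behind encoding-protected table lookups (not the paper's SM4 construction), the sketch below wraps a toy substitution table with randomly generated input and output byte permutations so that unencoded values never appear during the lookup; the permutations stand in for the affine transformations and nonlinear encodings.

```python
# Toy sketch of an encoding-protected table lookup. Random byte
# permutations stand in for the randomly generated affine/nonlinear
# encodings used in white-box designs (NOT the paper's SM4 construction).
import random

def random_encoding(rng):
    perm = list(range(256))
    rng.shuffle(perm)
    inv = [0] * 256
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

rng = random.Random(0)
sbox = [(x * 7 + 3) % 256 for x in range(256)]   # stand-in substitution table

in_enc, in_dec = random_encoding(rng)            # input encoding / decoding
out_enc, out_dec = random_encoding(rng)          # output encoding / decoding

# Protected table: absorbs the decoding of its input and the encoding of
# its output, so plain table values never appear in memory during lookup.
protected = [out_enc[sbox[in_dec[e]]] for e in range(256)]

x = 0x2A
encoded_y = protected[in_enc[x]]
assert out_dec[encoded_y] == sbox[x]             # same result, hidden values
```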
Patterns (N Y)
December 2024
Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
To achieve adequate trust in patient-critical medical tasks, artificial intelligence systems must be able to recognize instances where they cannot operate confidently. Ensemble methods are deployed to estimate uncertainty, but models in an ensemble often share the same vulnerabilities to adversarial attacks. We propose an ensemble approach based on feature decorrelation and Fourier partitioning that teaches networks diverse features, reducing the chance of perturbation-based fooling.
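One plausible way to realize Fourier partitioning is to split each training image into radial frequency bands and feed different bands to different ensemble members; the band edges and masking scheme below are assumptions for illustration, not necessarily the paper's exact recipe.

```python
# Illustrative Fourier partitioning: split an image into radial frequency
# bands so that different ensemble members can be trained on different
# bands. The cutoffs and radial masks are assumptions, not the paper's.
import numpy as np

def fourier_bands(img, cutoffs=(0.1, 0.3)):
    """Split a 2-D image into low/mid/high radial frequency bands."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt(((yy - h // 2) / h) ** 2 + ((xx - w // 2) / w) ** 2)
    edges = (0.0,) + tuple(cutoffs) + (np.inf,)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi)
        # Invert only the selected band back to the spatial domain.
        bands.append(np.real(np.fft.ifft2(np.fft.ifftshift(f * mask))))
    return bands

low, mid, high = fourier_bands(np.random.rand(64, 64))  # bands sum to the image
```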
Entropy (Basel)
October 2024
The Third Faculty of Xi'an Research Institute of High Technology, Xi'an 710064, China.
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be carried out in the physical world. However, most existing adversarial camouflage textures for attacking object detection models consider only the effectiveness of the attack and ignore its stealthiness, so the generated textures appear abrupt to human observers. To address this issue, we add a style transfer module to an adversarial texture generation framework.
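Conceptually, a style transfer module can be folded into texture optimization by adding a Gram-matrix style loss to the adversarial objective; the sketch below is a hedged illustration in which `feature_extractor`, `detector_loss`, and the weighting are placeholders rather than the paper's framework.

```python
# Hedged sketch: combining a Gram-matrix style loss with an adversarial
# objective while optimizing a texture. `feature_extractor`, `detector_loss`
# and the weights are hypothetical placeholders, not the paper's framework.
import torch

def gram(features):
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(texture, style_image, feature_extractor, detector_loss,
                 style_weight=1e4):
    # Style term: match Gram matrices of the texture and a reference style
    # at each layer returned by the feature extractor.
    style_term = sum(
        torch.nn.functional.mse_loss(gram(ft), gram(fs))
        for ft, fs in zip(feature_extractor(texture),
                          feature_extractor(style_image))
    )
    # Adversarial term: e.g. negative detection confidence on rendered scenes.
    adv_term = detector_loss(texture)
    return adv_term + style_weight * style_term
```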
Sensors (Basel)
November 2024
School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China.
Frequency-hopping (FH) communication adversarial research is a key area of modern electronic countermeasures. To counter interfering parties that use deep neural networks (DNNs) to classify and identify multiple intercepted FH signals, enabling targeted interference and degrading communication performance, this paper presents a batch feature-point targetless adversarial sample generation method based on the Jacobian saliency map (BPNT-JSMA). The method builds on the traditional JSMA to generate feature saliency maps, selects the top 8% of salient feature points in batches for perturbation, and imposes a perturbation limit to restrict the extreme values of single-point perturbations.
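A hedged sketch of the batch feature-point selection idea is given below: take the top fraction of points from a saliency map, perturb only those, and clip each single-point change to a fixed limit. The saliency definition, the 8% fraction, and the clipping value are used only for illustration and are not the exact BPNT-JSMA procedure.

```python
# Sketch of batch feature-point selection from a saliency map: keep the top
# fraction of points by saliency and perturb only those, clipping each
# single-point change to a fixed limit. Illustrative only, not BPNT-JSMA.
import torch

def perturb_top_salient(x, saliency, frac=0.08, limit=0.1):
    flat = saliency.abs().flatten(1)              # (batch, num_points)
    k = max(1, int(frac * flat.shape[1]))
    topk = flat.topk(k, dim=1).indices            # most salient points
    mask = torch.zeros_like(flat)
    mask.scatter_(1, topk, 1.0)
    mask = mask.view_as(x)
    step = saliency.sign() * limit                # bounded single-point change
    return x + mask * step
```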
J Neural Eng
October 2024
Belt and Road Joint Laboratory on Measurement and Control Technology, Huazhong University of Science and Technology, Wuhan, People's Republic of China.