Boosting the transferability of adversarial examples via stochastic serial attack.

Neural Netw

College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China.

Published: June 2022

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by imposing mild perturbations on clean inputs. An intriguing property of adversarial examples is that they transfer across different DNNs, so transfer-based attacks against DNNs have become an increasing concern. In this scenario, attackers devise adversarial instances based on a local model, without feedback information from the target one. Unfortunately, most existing transfer-based attack methods employ only a single local model to generate adversarial examples, which yields poor transferability because the examples overfit that local model. Although several ensemble attacks have been proposed, they improve the transferability of adversarial examples only slightly while incurring high memory costs during the training process. To this end, we propose a novel attack strategy called stochastic serial attack (SSA). It adopts a serial strategy to attack local models, which reduces memory consumption compared with parallel ensemble attacks. Moreover, since the local models are stochastically selected from a large model set, SSA ensures that the adversarial examples do not overfit the specific weaknesses of the local source models. Extensive experiments on the ImageNet dataset and the NeurIPS 2017 adversarial competition dataset show that SSA improves the transferability of adversarial examples while reducing the memory consumption of the training process.
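To make the serial strategy concrete, here is a minimal PyTorch sketch of a stochastic serial, iterative gradient-sign attack. It is an illustration under assumptions, not the authors' released code: the names (stochastic_serial_attack, model_pool, subset_size, eps, alpha, num_iters) and the FGSM-style update are ours. At each iteration a small subset of local models is drawn at random from a larger pool and attacked one after another, so only one model's gradients need to be materialized at a time.

import random
import torch
import torch.nn.functional as F

def stochastic_serial_attack(x, y, model_pool, eps=16/255, alpha=2/255,
                             num_iters=10, subset_size=2):
    """Sketch of a stochastic serial attack (SSA-style); not the paper's exact algorithm.

    x: clean images, shape (N, C, H, W), values in [0, 1]
    y: ground-truth labels, shape (N,)
    model_pool: list of pretrained local models (assumed to be in eval mode)
    """
    x_adv = x.clone().detach()
    for _ in range(num_iters):
        # Stochastically pick a small subset of local models for this iteration.
        for model in random.sample(model_pool, subset_size):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                # Serial update: one gradient-sign step per sampled model.
                x_adv = x_adv + alpha * grad.sign()
                # Project back into the eps-ball around x and the valid pixel range.
                x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
            x_adv = x_adv.detach()
    return x_adv

Because only one model's forward and backward pass is alive at any moment, peak memory grows with the largest single model rather than with the whole ensemble, which is the memory argument made in the abstract.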


Source
http://dx.doi.org/10.1016/j.neunet.2022.02.025

Publication Analysis

Top Keywords

adversarial examples: 28
transferability adversarial: 12
local model: 12
adversarial: 9
stochastic serial: 8
serial attack: 8
training process: 8
local models: 8
memory consumption: 8
examples: 7

Similar Publications

This dataset is generated from real-time simulations conducted in MATLAB/Simscape, focusing on the impact of smart noise signals on battery energy storage systems (BESS). Using a Deep Reinforcement Learning (DRL) agent known as Proximal Policy Optimization (PPO), noise signals in the form of subtle millivolt and milliampere variations are strategically created to represent realistic cases of False Data Injection Attacks (FDIA). These signals are designed to disrupt the State of Charge (SoC) and State of Health (SoH) estimation blocks within Unscented Kalman Filters (UKF).
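As a rough illustration of how millivolt/milliampere-level injections can bias state-of-charge estimation, the sketch below superimposes small offsets on a simulated measurement stream feeding a simple Coulomb-counting SoC estimator. It is a simplified stand-in only: the dataset itself uses a PPO agent and Unscented Kalman Filters in MATLAB/Simscape, and the names here (inject_fdia_noise, coulomb_count_soc) are hypothetical.

import numpy as np

def inject_fdia_noise(voltage_v, current_a, rng, mv_scale=5e-3, ma_bias=20e-3, ma_scale=5e-3):
    """Hypothetical FDIA: superimpose subtle mV/mA signals on measurements.
    A learned PPO policy would shape these injections; a small bias plus noise stands in here."""
    v_attacked = voltage_v + rng.normal(0.0, mv_scale, size=voltage_v.shape)
    i_attacked = current_a + ma_bias + rng.normal(0.0, ma_scale, size=current_a.shape)
    return v_attacked, i_attacked

def coulomb_count_soc(current_a, dt_s, capacity_ah, soc0=0.8):
    """Simplified SoC estimator (Coulomb counting) standing in for the UKF estimation block."""
    charge_ah = np.cumsum(current_a) * dt_s / 3600.0
    return np.clip(soc0 - charge_ah / capacity_ah, 0.0, 1.0)

rng = np.random.default_rng(0)
t = np.arange(0.0, 3600.0, 1.0)                 # one hour of 1 s samples
current = np.full_like(t, 2.0)                  # constant 2 A discharge
voltage = 3.7 - 1e-4 * t                        # slowly sagging terminal voltage
_, current_attacked = inject_fdia_noise(voltage, current, rng)

soc_clean = coulomb_count_soc(current, 1.0, capacity_ah=10.0)
soc_attacked = coulomb_count_soc(current_attacked, 1.0, capacity_ah=10.0)
print("SoC error induced by the injection:", abs(soc_clean[-1] - soc_attacked[-1]))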


Large visual language models like Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to the influence of adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve the accuracy and performance of visual language models.
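The excerpt does not spell out MDAPT's architecture, so the following is only a generic sketch of text-guided visual prompt tuning against a frozen CLIP-style model: learnable visual prompt tokens, shifted by a projection of the text-prompt embedding, are prepended to the image patch tokens, and only the prompt parameters are trained. All module names, shapes, and the projection design are assumptions, not MDAPT itself.

import torch
import torch.nn as nn

class TextGuidedVisualPrompt(nn.Module):
    """Generic text-guided visual prompt tuning (illustrative only, not MDAPT)."""
    def __init__(self, text_dim=512, patch_dim=768, num_prompts=4):
        super().__init__()
        self.base_prompts = nn.Parameter(torch.randn(num_prompts, patch_dim) * 0.02)
        self.text_to_visual = nn.Linear(text_dim, patch_dim)   # guidance from the text prompt

    def forward(self, patch_tokens, text_prompt_embedding):
        # patch_tokens: (B, N, patch_dim); text_prompt_embedding: (B, text_dim)
        guidance = self.text_to_visual(text_prompt_embedding).unsqueeze(1)   # (B, 1, patch_dim)
        prompts = self.base_prompts.unsqueeze(0) + guidance                  # (B, P, patch_dim)
        return torch.cat([prompts, patch_tokens], dim=1)                     # prepend prompts

# Only the prompt module is optimized; the CLIP image/text encoders would stay frozen.
prompt_module = TextGuidedVisualPrompt()
optimizer = torch.optim.AdamW(prompt_module.parameters(), lr=1e-3)
tokens_with_prompts = prompt_module(torch.randn(2, 49, 768), torch.randn(2, 512))  # (2, 53, 768)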


Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.

AJNR Am J Neuroradiol

January 2025

From the Department of Radiology (A.T.T., D.Z., D.K., S. Payabvash) and Neurology (S. Park), NewYork-Presbyterian/Columbia University Irving Medical Center, Columbia University, New York, NY; Department of Radiology and Biomedical Imaging (G.A., A.M.) and Neurology (G.J.F., K.N.S.), Yale School of Medicine, New Haven, CT; Zeenat Qureshi Stroke Institute and Department of Neurology (A.I.Q.), University of Missouri, Columbia, MO; Department of Neurosurgery (S.M.), Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, New York, NY; and Department of Neurology (S.B.M.), Weill Cornell Medical College, Cornell University, New York, NY.

Background And Purpose: Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep-learning models' prediction errors. Testing deep-learning model performance on examples of adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness.
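As a concrete, generic example of the evaluation described here, the sketch below crafts single-step FGSM adversarial inputs and measures accuracy on them; folding such examples back into the training set is the standard adversarial-training recipe the abstract alludes to. The model and data-loader names are placeholders, and the paper's actual perturbation method for CT volumes is not specified here.

import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=2/255):
    """Single-step FGSM perturbation of inputs (generic sketch, not the paper's pipeline)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return torch.clamp(x + eps * grad.sign(), 0, 1).detach()

def adversarial_accuracy(model, loader, eps=2/255, device="cpu"):
    """Accuracy on FGSM-perturbed inputs as a simple robustness measure."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_examples(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total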


Cyber threat intelligence firms play a powerful role in producing knowledge, uncertainty, and ignorance about threats to organizations and governments globally. Drawing on historical and ethnographic methods, we show how cyber threat intelligence analysts navigate distinctive types of uncertainty as they transform digital traces into marketable products and services. We make two related contributions and arguments.


Adversarial training has become a primary method for enhancing the robustness of deep learning models. In recent years, fast adversarial training methods have gained widespread attention due to their lower computational cost. However, since fast adversarial training uses single-step adversarial attacks instead of multi-step attacks, the generated adversarial examples lack diversity, making models prone to catastrophic overfitting and loss of robustness.
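For context, below is a minimal sketch of the single-step ("fast") adversarial training loop the excerpt refers to, in the style of FGSM training with random initialization; the limited diversity of these single-step examples is what makes catastrophic overfitting possible. Model, optimizer, and loader names are placeholders, and the hyperparameters are illustrative.

import torch
import torch.nn.functional as F

def fast_adversarial_training_epoch(model, loader, optimizer, eps=8/255, alpha=10/255, device="cpu"):
    """One epoch of single-step (FGSM) adversarial training with random initialization."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Random start inside the eps-ball, then a single FGSM step.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps).detach()
        # Train on the single-step adversarial example.
        optimizer.zero_grad()
        F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y).backward()
        optimizer.step()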

