Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by imposing mild perturbations on clean inputs. An intriguing property of adversarial examples is that they transfer across different DNNs, so transfer-based attacks against DNNs have become an increasing concern. In this scenario, attackers craft adversarial examples on local surrogate models without any feedback from the target model. Unfortunately, most existing transfer-based attack methods employ only a single local model to generate adversarial examples, which results in poor transferability because the examples overfit that model. Although several ensemble attacks have been proposed, they improve transferability only slightly while requiring a high memory cost during the generation process. To this end, we propose a novel attack strategy called stochastic serial attack (SSA). It adopts a serial strategy to attack local models, which reduces memory consumption compared to parallel ensemble attacks. Moreover, since local models are stochastically selected from a large model set, SSA ensures that the adversarial examples do not overfit the specific weaknesses of the local source models. Extensive experiments on the ImageNet dataset and the NeurIPS 2017 adversarial competition dataset show that SSA improves the transferability of adversarial examples while reducing memory consumption during generation.
DOI: http://dx.doi.org/10.1016/j.neunet.2022.02.025
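The abstract does not include code; the following is a minimal sketch of the stochastic serial idea under stated assumptions: an iterative, L_inf-bounded FGSM-style update in PyTorch in which one surrogate model is sampled from a pool at each step, so only a single model participates in each backward pass. The model pool, step size, and loss are illustrative choices, not the authors' exact configuration.

```python
# Hypothetical sketch of a stochastic serial attack (SSA)-style loop.
# Assumptions (not from the paper): an L_inf-bounded iterative FGSM update,
# a pool of pretrained surrogate models, and cross-entropy loss.
import random
import torch
import torch.nn.functional as F
import torchvision.models as tvm

def stochastic_serial_attack(x, y, model_pool, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples by attacking one randomly chosen surrogate
    model per iteration (serial, low memory) instead of backpropagating
    through the whole ensemble at once (parallel)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        model = random.choice(model_pool)        # stochastic surrogate selection
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # FGSM-style step
            x_adv = torch.clamp(x_adv, x - eps, x + eps)   # project into eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv.detach()

# Example surrogate pool (illustrative); only one model takes part in each
# backward pass, which is where the memory saving over a parallel ensemble comes from.
pool = [tvm.resnet50(weights="IMAGENET1K_V1").eval(),
        tvm.vgg16(weights="IMAGENET1K_V1").eval()]
```

Attacking one sampled model per iteration keeps memory close to that of a single-model attack, whereas a parallel ensemble attack must hold every surrogate's activations for one combined backward pass.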
Data Brief
February 2025
School of Engineering and Technology, University of New South Wales, Canberra, Australia.
This dataset is generated from real-time simulations conducted in MATLAB/Simscape, focusing on the impact of smart noise signals on battery energy storage systems (BESS). Using a Deep Reinforcement Learning (DRL) agent, Proximal Policy Optimization (PPO), noise signals in the form of subtle millivolt and milliampere variations are strategically crafted to represent realistic cases of False Data Injection Attacks (FDIA). These signals are designed to disrupt the State of Charge (SoC) and State of Health (SoH) estimation blocks built on Unscented Kalman Filters (UKF).
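As a rough illustration of the measurement-level attack described above (not the authors' MATLAB/Simscape plus PPO pipeline), the sketch below injects smooth, millivolt- and milliampere-scale perturbations into battery measurements and compares SoC estimates on clean versus corrupted signals. The bounds, the coulomb-counting stand-in for the UKF, and the signal shapes are assumptions for demonstration only.

```python
# Illustrative sketch only: inject millivolt/milliampere-scale false data into
# battery measurements before a simple coulomb-counting SoC estimator.
# The bounds, estimator, and signal shapes are assumptions, not the dataset's.
import numpy as np

def inject_fdia(voltage_v, current_a, mv_bound=5e-3, ma_bound=5e-3, seed=0):
    """Return measurements corrupted by smooth, bounded noise that mimics
    a stealthy false data injection attack (FDIA)."""
    rng = np.random.default_rng(seed)
    n = len(voltage_v)
    # Smooth the raw noise so the attack stays subtle rather than spiky.
    smooth = lambda s: np.convolve(s, np.ones(25) / 25, mode="same")
    v_attack = np.clip(smooth(rng.normal(0, mv_bound, n)), -mv_bound, mv_bound)
    i_attack = np.clip(smooth(rng.normal(0, ma_bound, n)), -ma_bound, ma_bound)
    return voltage_v + v_attack, current_a + i_attack

def coulomb_count_soc(current_a, soc0=0.8, capacity_ah=2.5, dt_s=1.0):
    """Stand-in SoC estimator (the dataset targets UKF-based estimation)."""
    return soc0 - np.cumsum(current_a) * dt_s / (3600.0 * capacity_ah)

# Example: a constant 1 A discharge corrupted by milliampere-level FDIA.
t = np.arange(3600)
v_clean, i_clean = 3.7 + 0.0 * t, np.ones_like(t, dtype=float)
v_bad, i_bad = inject_fdia(v_clean, i_clean)
soc_error = coulomb_count_soc(i_bad) - coulomb_count_soc(i_clean)
```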
Sensors (Basel)
January 2025
School of Computer Science, Hubei University of Technology, Wuhan 430068, China.
Large visual language models such as Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve both the accuracy and the robustness of VLMs.
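A minimal, hypothetical sketch of text-guided visual prompt tuning in the spirit of this idea is shown below (not the authors' MDAPT implementation). It assumes an OpenAI-CLIP-style interface exposing encode_image and encode_text, and learns a pixel-space border prompt that keeps image features aligned with class text prompts; the prompt shape, learning rate, and 100.0 logit scale are illustrative assumptions.

```python
# Hypothetical text-guided visual prompt tuning sketch (CLIP-style interface assumed).
import torch
import torch.nn.functional as F

def tune_visual_prompt(clip_model, tokenizer, images, labels, class_names,
                       steps=100, lr=1e-2, pad=30):
    """Learn a pixel-space border prompt whose image features stay aligned
    with class text prompts (illustrative, not the paper's method)."""
    device = images.device
    for p in clip_model.parameters():            # freeze the backbone
        p.requires_grad_(False)
    prompt = torch.zeros(1, 3, images.shape[-2], images.shape[-1],
                         device=device, requires_grad=True)
    mask = torch.ones_like(prompt)
    mask[:, :, pad:-pad, pad:-pad] = 0           # only the image border is tuned
    text = tokenizer([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        text_feat = F.normalize(clip_model.encode_text(text), dim=-1)
    opt = torch.optim.Adam([prompt], lr=lr)
    for _ in range(steps):
        img_feat = F.normalize(
            clip_model.encode_image(images + prompt * mask), dim=-1)
        logits = 100.0 * img_feat @ text_feat.t()   # CLIP-style similarity
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (prompt * mask).detach()
```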
AJNR Am J Neuroradiol
January 2025
From the Department of Radiology (A.T.T., D.Z., D.K., S. Payabvash) and Neurology (S. Park), NewYork-Presbyterian/Columbia University Irving Medical Center, Columbia University, New York, NY; Department of Radiology and Biomedical Imaging (G.A., A.M.) and Neurology (G.J.F., K.N.S.), Yale School of Medicine, New Haven, CT; Zeenat Qureshi Stroke Institute and Department of Neurology (A.I.Q.), University of Missouri, Columbia, MO; Department of Neurosurgery (S.M.), Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, New York, NY; and Department of Neurology (S.B.M.), Weill Cornell Medical College, Cornell University, New York, NY.
Background And Purpose: Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans designed to increase a deep-learning model's prediction errors. Evaluating model performance on adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness.
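As a hedged illustration of this kind of robustness test (not the study's code), the sketch below measures a trained classifier's accuracy on clean versus FGSM-perturbed inputs, i.e., single-step, voxel-level manipulations of the kind described above; the model, data loader, and epsilon are placeholder assumptions.

```python
# Illustrative robustness probe: clean vs. FGSM-perturbed accuracy.
# Model, loader, and epsilon are assumptions, not the study's setup.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=2/255):
    """Single-step, L_inf-bounded voxel perturbation of an input batch."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def clean_accuracy(model, loader, device="cpu"):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def adversarial_accuracy(model, loader, eps=2/255, device="cpu"):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm(model, x, y, eps)           # gradients needed for the attack
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```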
Soc Stud Sci
January 2025
King's College London, London, UK.
Cyber threat intelligence firms play a powerful role in producing knowledge, uncertainty, and ignorance about threats to organizations and governments globally. Drawing on historical and ethnographic methods, we show how cyber threat intelligence analysts navigate distinctive types of uncertainty as they transform digital traces into marketable products and services. We make two related contributions and arguments.
PLoS One
January 2025
School of Electrical Engineering, Zhejiang University, Hangzhou, China.
Adversarial training has become a primary method for enhancing the robustness of deep learning models. In recent years, fast adversarial training methods have gained widespread attention due to their lower computational cost. However, since fast adversarial training uses single-step adversarial attacks instead of multi-step attacks, the generated adversarial examples lack diversity, making models prone to catastrophic overfitting and loss of robustness.
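For context, a common fast adversarial training recipe in this setting is a single FGSM step from a random start inside the epsilon-ball (FGSM-RS style). The sketch below illustrates one such training epoch, with the model, optimizer, and hyperparameters as assumptions rather than the paper's method.

```python
# Illustrative fast (single-step) adversarial training epoch with random
# initialization; hyperparameters and model are assumptions for demonstration.
import torch
import torch.nn.functional as F

def fast_adv_train_epoch(model, loader, optimizer, eps=8/255, alpha=10/255,
                         device="cpu"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Random start inside the eps-ball, then one FGSM step (FGSM-RS style).
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        x_adv = (x + delta).clamp(0, 1)
        # Standard training step on the single-step adversarial batch.
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```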