Enhancing adversarial attacks with resize-invariant and logical ensemble.

Neural Netw

School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, 450001, China.

Published: May 2024

In black-box scenarios, most transfer-based attacks improve the transferability of adversarial examples by optimizing the gradient computation on the input image. Unfortunately, because the gradient is calculated and optimized for each pixel individually, the generated adversarial examples tend to overfit the local (surrogate) model and transfer poorly to the target model. To tackle this issue, we propose a resize-invariant method (RIM) and a logical ensemble transformation method (LETM) to enhance the transferability of adversarial examples. RIM is inspired by the resize-invariant property of Deep Neural Networks (DNNs): the range of resizable pixels is first divided into multiple intervals, and the input image is then randomly resized and padded within each interval. LETM then performs a logical ensemble of the RIM-transformed images to compute the final gradient-update direction. The proposed method adequately accounts for each pixel together with its surrounding pixels, minimizes the probability of duplicated image transformations, and effectively mitigates the overfitting of adversarial examples. Extensive experiments on the ImageNet dataset show that our approach outperforms other state-of-the-art methods and generates more transferable adversarial examples.
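
The abstract describes RIM and LETM only at a high level, so the following is a minimal PyTorch sketch of the idea rather than the authors' implementation: the equal-width interval splitting, nearest-neighbour resizing, zero padding back to the original resolution, and averaging of per-copy losses as the "logical ensemble" are all assumptions, and the names rim_transform and letm_gradient are hypothetical.

```python
import random
import torch
import torch.nn.functional as F

def rim_transform(x, low, high, num_intervals):
    """RIM sketch: split the resizable range [low, high] into num_intervals
    equal intervals and draw one randomly resized-and-padded copy per interval.
    Assumes x is (B, C, H, W) and low <= high <= H (resize down, pad back up)."""
    _, _, H, W = x.shape
    step = (high - low) / num_intervals
    copies = []
    for i in range(num_intervals):
        size = int(random.uniform(low + i * step, low + (i + 1) * step))
        resized = F.interpolate(x, size=(size, size), mode="nearest")
        pad_h, pad_w = H - size, W - size
        top, left = random.randint(0, pad_h), random.randint(0, pad_w)
        copies.append(F.pad(resized, (left, pad_w - left, top, pad_h - top)))
    return copies

def letm_gradient(model, x_adv, y, low, high, num_intervals):
    """LETM sketch: ensemble the loss over all RIM copies (here, a simple
    average) and differentiate through the resize/pad back to the input."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(xt), y) for xt in
               rim_transform(x_adv, low, high, num_intervals)) / num_intervals
    return torch.autograd.grad(loss, x_adv)[0]

# One I-FGSM-style step using the ensembled gradient (alpha is the step size):
# x_adv = (x_adv + alpha * letm_gradient(model, x_adv, y, 200, 224, 4).sign()).clamp(0, 1)
```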


Source
http://dx.doi.org/10.1016/j.neunet.2024.106194

Publication Analysis

Top Keywords

adversarial examples: 20
logical ensemble: 12
transferability adversarial: 8
input image: 8
adversarial: 5
examples: 5
image: 5
enhancing adversarial: 4
adversarial attacks: 4
attacks resize-invariant: 4

Similar Publications

Deep learning enabled rapid classification of yeast species in food by imaging of yeast microcolonies.

Food Res Int

February 2025

Department of Food Science & Technology, University of California-Davis, Davis, CA 95616, USA; Department of Biological & Agricultural Engineering, University of California-Davis, Davis, CA 95616, USA. Electronic address:

Diverse species of yeasts are commonly associated with food and food production environments. The contamination of food products by spoilage yeasts poses significant challenges, leading to quality degradation and food loss. Similarly, the introduction of undesirable strains during fermentation can pose considerable challenges to the quality and progress of the fermentation process.


Adversarial attacks have been widely studied in computer vision (CV), but their effect on network-security applications remains an open area of investigation. As IoT, AI, and 5G continue to converge and realize the potential of Industry 4.0, security events and incidents on IoT systems have increased.


This dataset is generated from real-time simulations conducted in MATLAB/Simscape, focusing on the impact of smart noise signals on battery energy storage systems (BESS). Using a Deep Reinforcement Learning (DRL) agent based on Proximal Policy Optimization (PPO), noise signals in the form of subtle millivolt and milliampere variations are strategically crafted to represent realistic False Data Injection Attacks (FDIA). These signals are designed to disrupt the State of Charge (SoC) and State of Health (SoH) estimation blocks within Unscented Kalman Filters (UKF).
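
As a loose illustration only (not the MATLAB/Simscape pipeline or the PPO agent described above), a few lines of Python can show the general shape of such an attack: small, slowly drifting millivolt/milliampere offsets added to the measured signals before they reach the SoC/SoH estimator. The function name and magnitudes are invented.

```python
import numpy as np

def inject_fdia(voltage_v, current_a, mv_amp=0.005, ma_amp=0.002, seed=0):
    """Toy false-data injection: add a small, slowly drifting offset
    (a few mV / mA) to measured voltage and current arrays before they
    are fed to a SoC/SoH estimator."""
    rng = np.random.default_rng(seed)
    drift = np.cumsum(rng.normal(0.0, 1.0, size=len(voltage_v)))
    drift /= max(1.0, np.abs(drift).max())          # keep the drift in [-1, 1]
    return voltage_v + mv_amp * drift, current_a + ma_amp * drift
```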


Large visual language models like Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to the influence of adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve the accuracy and performance of visual language models.
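
A rough, generic PyTorch sketch of the core idea above (text prompts steering the generation of visual prompts): the class name, dimensions, and the linear text-to-visual mapping are assumptions for illustration, not the MDAPT implementation.

```python
import torch
import torch.nn as nn

class TextGuidedVisualPrompt(nn.Module):
    """Learnable text-prompt embeddings are mapped to visual prompt tokens,
    which are prepended to the image patch sequence of a frozen encoder."""
    def __init__(self, text_dim=512, vis_dim=768, n_prompts=4):
        super().__init__()
        self.text_prompts = nn.Parameter(0.02 * torch.randn(n_prompts, text_dim))
        self.to_visual = nn.Linear(text_dim, vis_dim)

    def forward(self, patch_tokens):                       # (B, N, vis_dim)
        prompts = self.to_visual(self.text_prompts)        # (n_prompts, vis_dim)
        prompts = prompts.unsqueeze(0).expand(patch_tokens.size(0), -1, -1)
        return torch.cat([prompts, patch_tokens], dim=1)   # (B, n_prompts + N, vis_dim)
```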


Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.

AJNR Am J Neuroradiol

January 2025

From the Department of Radiology (A.T.T., D.Z., D.K., S. Payabvash) and Neurology (S. Park), NewYork-Presbyterian/Columbia University Irving Medical Center, Columbia University, New York, NY; Department of Radiology and Biomedical Imaging (G.A., A.M.) and Neurology (G.J.F., K.N.S.), Yale School of Medicine, New Haven, CT; Zeenat Qureshi Stroke Institute and Department of Neurology (A.I.Q.), University of Missouri, Columbia, MO; Department of Neurosurgery (S.M.), Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, New York, NY; and Department of Neurology (S.B.M.), Weill Cornell Medical College, Cornell University, New York, NY.

Background And Purpose: Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep-learning models' prediction errors. Testing deep-learning model performance on examples of adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness.
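
The recipe described above (probe a trained model with adversarially perturbed scans, and optionally fold such examples into training) can be sketched in a few generic PyTorch lines; the single-step FGSM perturbation and the epsilon value are illustrative assumptions, not the authors' CT pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.01):
    """One-step FGSM: perturb each voxel by +/- epsilon in the direction
    that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + epsilon * grad.sign()).detach()

# Robustness check: evaluate accuracy on fgsm_example(model, x, y);
# adversarial training simply mixes such examples into each training batch.
```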

