Remix: Towards the transferability of adversarial examples.

Neural Netw

College of Information Science and Technology, Donghua University, 201620, Shanghai, China; Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, 201620, Shanghai, China.

Published: June 2023

Deep neural networks (DNNs) are susceptible to adversarial examples, which are crafted by deliberately adding human-imperceptible perturbations to original images. To explore the vulnerability of DNN models, transfer-based black-box attacks are attracting increasing attention from researchers owing to their high practicality: the resulting adversarial examples can readily attack other models in the black-box setting, although their success rates are often unsatisfactory. To boost adversarial transferability, we propose a Remix method with multiple input transformations, which achieves multiple data augmentation by utilizing gradients from previous iterations and images from other categories within the same iteration. Extensive experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset demonstrate that the proposed approach drastically enhances adversarial transferability while maintaining comparable white-box success rates against both undefended and defended models. Furthermore, extended experiments based on LPIPS show that our method maintains a perceived distance similar to that of other baselines.
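
The abstract names the key ingredients (momentum carried across iterations plus images from other categories mixed into the input) but not the exact algorithm, so the following is only a minimal PyTorch sketch of a momentum attack with a simple cross-category mixing transform, in the spirit of the description above. It is not the authors' Remix algorithm, and the model, step budget, and mixing weight are assumptions.

    # Sketch only: MI-FGSM-style momentum plus a simple cross-category mix.
    # NOT the authors' exact Remix algorithm; `model` takes inputs in [0, 1].
    import torch
    import torch.nn.functional as F

    def remix_style_attack(model, x, y, x_other, eps=16/255, steps=10, mu=1.0, mix=0.2):
        # x: clean batch; y: labels; x_other: images drawn from other categories
        alpha = eps / steps                    # per-iteration step size
        g = torch.zeros_like(x)                # momentum: gradients from previous iterations
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            # input transformation: blend in an image from another category
            x_in = (x_adv + mix * x_other) / (1.0 + mix)
            loss = F.cross_entropy(model(x_in), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # accumulate momentum, normalizing each new gradient by its L1 norm
            g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
            x_adv = x_adv.detach() + alpha * g.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project into the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

For the LPIPS claim, perceptual distance can be checked with the lpips package, which expects inputs scaled to [-1, 1]:

    import lpips
    loss_fn = lpips.LPIPS(net='alex')         # AlexNet variant of the LPIPS metric
    d = loss_fn(2 * x - 1, 2 * x_adv - 1)     # smaller values = closer to the clean image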

Source
http://dx.doi.org/10.1016/j.neunet.2023.04.012

Publication Analysis

Top Keywords

adversarial examples: 12
success rates: 8
adversarial transferability: 8
adversarial: 6
remix transferability: 4
transferability adversarial: 4
examples deep: 4
deep neural: 4
neural networks: 4
networks dnns: 4

Similar Publications

Mitigating over-saturated fluorescence images through a semi-supervised generative adversarial network.

Proc IEEE Int Symp Biomed Imaging

May 2024

Department of Electrical and Computer Engineering, Nashville, TN, USA.

Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell level analysis in areas with over-saturated pixels.

Deep learning enabled rapid classification of yeast species in food by imaging of yeast microcolonies.

Food Res Int

February 2025

Department of Food Science & Technology, University of California-Davis, Davis, CA 95616, USA; Department of Biological & Agricultural Engineering, University of California-Davis, Davis, CA 95616, USA.

Diverse species of yeasts are commonly associated with food and food production environments. The contamination of food products by spoilage yeasts poses significant challenges, leading to quality degradation and food loss. Similarly, the introduction of undesirable strains during fermentation can cause considerable challenges with the quality and progress of the fermentation process.

Adversarial attacks have been widely studied in computer vision (CV), but their effect on network security applications remains an open area of investigation. As IoT, AI, and 5G continue to converge and realize the potential of Industry 4.0, security events and incidents on IoT systems have increased.

This dataset is generated from real-time simulations conducted in MATLAB/Simscape, focusing on the impact of smart noise signals on battery energy storage systems (BESS). Using a Deep Reinforcement Learning (DRL) agent known as Proximal Policy Optimization (PPO), noise signals in the form of subtle millivolt and milliampere variations are strategically created to represent realistic cases of False Data Injection Attacks (FDIA). These signals are designed to disrupt the State of Charge (SoC) and State of Health (SoH) estimation blocks within Unscented Kalman Filters (UKF).
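
Since this snippet describes subtle current/voltage injections aimed at SoC estimation, a minimal sketch of the attack pattern follows; bounded random noise stands in for the PPO agent, a naive Coulomb-counting estimator stands in for the UKF block, and all constants are hypothetical.

    # Sketch only: random bounded noise replaces the PPO policy, and Coulomb
    # counting replaces the UKF-based SoC estimator; constants are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, capacity_ah = 1.0, 50.0                  # 1 s steps, 50 Ah pack (hypothetical)
    current_a = np.full(3600, 10.0)              # true 10 A discharge for one hour

    # FDIA-style injection: milliampere-scale jitter with a small constant bias
    injected_ma = rng.uniform(-50.0, 50.0, size=current_a.shape) + 30.0
    measured_a = current_a + injected_ma / 1000.0

    def coulomb_soc(i_a, soc0=1.0):
        # SoC(t) = SoC(0) - integral(I dt) / capacity
        return soc0 - np.cumsum(i_a) * dt / (capacity_ah * 3600.0)

    drift = abs(coulomb_soc(measured_a)[-1] - coulomb_soc(current_a)[-1])
    print(f"SoC estimate drift after one hour: {drift:.6f}")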

Large visual language models such as Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve the accuracy and robustness of visual language models.
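
The core idea here, generating visual prompts from text prompts, can be shown at the shape level; the module below is a toy placeholder, not CLIP and not MDAPT's actual architecture, so treat it purely as an illustration of text-conditioned visual prompt tokens.

    # Sketch only: a toy text-conditioned visual prompt generator, not MDAPT.
    import torch
    import torch.nn as nn

    class TextToVisualPrompt(nn.Module):
        def __init__(self, text_dim=512, vis_dim=768, n_prompts=4):
            super().__init__()
            # project pooled text features into a few visual prompt tokens
            self.proj = nn.Linear(text_dim, n_prompts * vis_dim)
            self.n_prompts, self.vis_dim = n_prompts, vis_dim

        def forward(self, text_feat, patch_tokens):
            # text_feat: (B, text_dim); patch_tokens: (B, N, vis_dim)
            prompts = self.proj(text_feat).view(-1, self.n_prompts, self.vis_dim)
            # prepend prompts so the visual backbone attends to them
            return torch.cat([prompts, patch_tokens], dim=1)

    tokens = TextToVisualPrompt()(torch.randn(2, 512), torch.randn(2, 196, 768))
    print(tokens.shape)   # torch.Size([2, 200, 768])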
