FDAA: A feature distribution-aware transferable adversarial attack method.

Neural Netw

School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, Guangdong, China.

Published: October 2024

In recent years, transferable feature-level adversarial attacks have become a research hot spot because they can successfully attack unknown deep neural networks. Several problems, however, limit their transferability. Existing feature disruption methods often focus on computing feature weights precisely while overlooking the noise in feature maps, and consequently end up disturbing non-critical features. Meanwhile, geometric augmentation algorithms are used to enhance image diversity but compromise information integrity, which hampers models from capturing comprehensive features. Furthermore, current feature perturbation methods ignore the density distribution of object-relevant key features, which concentrate mainly in salient regions and are sparser in the much larger background region, and therefore achieve limited transferability. To tackle these challenges, this paper proposes FDAA, a feature distribution-aware transferable adversarial attack method that applies distinct strategies to different image regions. A novel Aggregated Feature Map Attack (AFMA) is presented to significantly denoise feature maps, and an input transformation strategy called Smixup is introduced to help feature disruption algorithms capture comprehensive features. Extensive experiments demonstrate that the proposed scheme achieves better transferability, with an average success rate of 78.6% against adversarially trained models.
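The abstract does not give implementation details, so the following PyTorch sketch is only a hypothetical illustration of a feature-level attack in this spirit: an intermediate feature map is aggregated across channels (a stand-in for AFMA's denoising), the input is blended with a reference image (a mixup-style stand-in for Smixup, whose exact form is not specified in the abstract), and the input is perturbed to suppress the aggregated features. The layer choice and all hyperparameters are assumptions.

```python
# Hypothetical sketch of a feature-level transferable attack in the spirit of
# FDAA's AFMA + Smixup, based only on the abstract; not the authors' method.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

features = {}
def hook(_module, _inp, out):
    features["mid"] = out
model.layer3.register_forward_hook(hook)  # mid-level features tend to transfer well

def aggregate(fmap):
    """Channel-mean aggregation as a stand-in for AFMA's denoising step."""
    return fmap.mean(dim=1, keepdim=True)

def smixup(x, ref, lam=0.7):
    """Mixup-style blend with a reference image; the real Smixup may differ."""
    return lam * x + (1.0 - lam) * ref

def fdaa_like_attack(x, ref, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(smixup(x_adv, ref))               # forward pass populates features["mid"]
        loss = aggregate(features["mid"]).abs().mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()   # suppress aggregated features
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

x = torch.rand(1, 3, 224, 224)    # placeholder input
ref = torch.rand(1, 3, 224, 224)  # placeholder reference image
adv = fdaa_like_attack(x, ref)
```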


Source
http://dx.doi.org/10.1016/j.neunet.2024.106467

Publication Analysis

Top Keywords

adversarial attack (12)
feature distribution-aware (8)
distribution-aware transferable (8)
transferable adversarial (8)
attack method (8)
feature (8)
feature disruption (8)
feature maps (8)
comprehensive features (8)
fdaa feature (4)

Similar Publications

Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance, and robotics. Adversarial patch attacks, which are easy to mount in the real world, can effectively defeat state-of-the-art neural network-based object detectors, posing a serious danger across many application domains.
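For concreteness, here is a minimal, hypothetical sketch of the adversarial patch idea: a patch is pasted into randomly drawn scenes and optimized to suppress a detector's strongest objectness score. The toy convolutional "detector" and all hyperparameters below are placeholders, not any surveyed method.

```python
# Minimal adversarial-patch optimization sketch against a toy detector.
import torch
import torch.nn as nn

detector = nn.Sequential(  # toy objectness scorer standing in for a real detector
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

def apply_patch(img, patch, y=80, x=80):
    """Paste the patch into the image at a fixed location."""
    out = img.clone()
    ph, pw = patch.shape[-2:]
    out[..., y:y + ph, x:x + pw] = patch
    return out

patch = torch.rand(1, 3, 40, 40, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for _ in range(100):
    img = torch.rand(1, 3, 224, 224)                       # random scene each step
    score = detector(apply_patch(img, patch.clamp(0, 1))).max()
    loss = score                                           # suppress strongest detection
    opt.zero_grad()
    loss.backward()
    opt.step()
```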


The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.


GAN-based data reconstruction attacks in split learning.

Neural Netw

January 2025

Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China. Electronic address:

Owing to its distinctive distributed privacy-preserving architecture, split learning has found widespread application in scenarios where computational resources on the client side are limited. Unlike federated learning, where each client retains the whole model, split learning partitions the model into two segments hosted separately on the server and the client, preventing either party from directly accessing the complete model structure and strengthening its resilience against attacks. However, existing studies have demonstrated that even with access restricted to partial model outputs, split learning remains susceptible to data reconstruction attacks.
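A minimal sketch of this partitioning, with an illustrative split point and layer sizes: only the intermediate activations ("smashed data") cross the client-server boundary, and that tensor is exactly the signal a GAN-based reconstruction attack would try to invert back into the private input.

```python
# Single-process simulation of a split learning forward/backward pass.
import torch
import torch.nn as nn

client_net = nn.Sequential(          # runs on the resource-limited client
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
)
server_net = nn.Sequential(          # runs on the server; the client never sees it
    nn.Linear(16 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10),
)

x = torch.rand(8, 3, 32, 32)         # private client data, never leaves the client
smashed = client_net(x)              # only this "smashed data" is sent to the server
logits = server_net(smashed)

loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss.backward()                      # in real split learning the server returns the
                                     # gradient of `smashed`; the client then
                                     # continues backpropagation locally
```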


Adversarial attacks have been widely studied in computer vision (CV), but their impact on network security applications remains an open research question. As IoT, AI, and 5G converge to realize the potential of Industry 4.0, security events and incidents on IoT systems have increased.


This research introduces a novel hybrid cryptographic framework that combines traditional cryptographic protocols with advanced methodologies, specifically Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) and Genetic Algorithms (GA). We evaluated several cryptographic protocols, including AES-ECB, AES-GCM, ChaCha20, RSA, and ECC, against critical metrics such as security level, efficiency, side-channel resistance, and cryptanalysis resistance. Our findings demonstrate that this integrated approach significantly enhances both security and efficiency across all evaluated protocols.
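As a rough illustration of the efficiency dimension of such an evaluation (not the authors' actual harness), the following micro-benchmark times two of the listed AEAD ciphers using the `cryptography` package; iteration count and payload size are arbitrary.

```python
# Throughput micro-benchmark for AES-GCM vs. ChaCha20-Poly1305.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

data = os.urandom(1 << 20)   # 1 MiB of random plaintext
nonce = os.urandom(12)       # reusing a nonce is unsafe in production; fine for timing

ciphers = {
    "AES-GCM": AESGCM(AESGCM.generate_key(bit_length=256)),
    "ChaCha20-Poly1305": ChaCha20Poly1305(ChaCha20Poly1305.generate_key()),
}
for name, cipher in ciphers.items():
    start = time.perf_counter()
    for _ in range(20):
        cipher.encrypt(nonce, data, None)   # AEAD encryption of the 1 MiB payload
    elapsed = time.perf_counter() - start
    print(f"{name}: {20 / elapsed:.1f} MiB/s")
```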

