In recent years, transferable feature-level adversarial attacks have become a research hot spot because they can successfully attack unknown deep neural networks. Several problems, however, limit their transferability. Existing feature disruption methods often focus on computing feature weights precisely while overlooking the noise in feature maps, which results in perturbing non-critical features. Meanwhile, geometric augmentation algorithms are used to enhance image diversity but compromise information integrity, which hampers models from capturing comprehensive features. Furthermore, current feature perturbation methods ignore the density distribution of object-relevant key features, which concentrate mainly in salient regions and only sparsely in the much larger background region, and therefore achieve limited transferability. To tackle these challenges, this paper proposes a feature distribution-aware transferable adversarial attack method, called FDAA, that applies distinct strategies to different image regions. A novel Aggregated Feature Map Attack (AFMA) is presented to significantly denoise feature maps, and an input transformation strategy, called Smixup, is introduced to help feature disruption algorithms capture comprehensive features. Extensive experiments demonstrate that the proposed scheme achieves better transferability, with an average success rate of 78.6% on adversarially trained models.
DOI: 10.1016/j.neunet.2024.106467
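The abstract above outlines the general recipe of a feature-level transferable attack: aggregate gradients over several perturbed copies of the input to obtain denoised feature weights, then perturb the image so that the weighted feature activations are suppressed. Below is a minimal sketch of that generic recipe in PyTorch; the surrogate model, hooked layer, masking scheme, and step sizes are illustrative assumptions and do not reproduce the authors' FDAA/AFMA or Smixup implementation.

```python
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
# Assumed surrogate model and hooked layer; normalization is omitted for brevity.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval().to(device)

features = {}
def save_features(_module, _inputs, output):
    # capture the intermediate feature map on every forward pass
    features["map"] = output
model.layer3.register_forward_hook(save_features)

def aggregated_weights(x, y, n_copies=8, drop_p=0.3):
    """Average the gradient of the true-class logit w.r.t. the hooked feature
    map over several randomly masked copies of the input. The averaging is an
    assumption mirroring the aggregation idea in the abstract: it is meant to
    suppress noisy, non-critical feature responses."""
    agg = None
    for _ in range(n_copies):
        mask = (torch.rand_like(x) > drop_p).float()   # random pixel dropout
        logits = model(x * mask)
        score = logits.gather(1, y.view(-1, 1)).sum()
        grad = torch.autograd.grad(score, features["map"])[0]
        agg = grad if agg is None else agg + grad
    return agg / n_copies

def feature_attack(x, y, eps=16/255, alpha=2/255, steps=10):
    """Iteratively perturb x (pixel values in [0, 1]) so that the weighted
    feature activations are pushed down, i.e. critical features are suppressed."""
    w = aggregated_weights(x, y).detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)                        # refreshes features["map"]
        loss = (w * features["map"]).sum()  # weighted feature response
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv - alpha * grad.sign()).detach()          # descend on the loss
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0, 1)
    return x_adv
```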
J Imaging
January 2025
Science and Research Department, Moscow Technical University of Communications and Informatics, 111024 Moscow, Russia.
Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance, and robotics. Adversarial patch attacks, which are easy to implement in the real world, can effectively defeat object detection by state-of-the-art neural-based detectors. This poses a serious danger in various fields of activity.
J Imaging
January 2025
Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.
Neural Netw
January 2025
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China. Electronic address:
Due to its distinctive distributed privacy-preserving architecture, split learning has found widespread application in scenarios where computational resources on the client side are limited. Unlike federated learning, in which clients retain the whole model, split learning partitions the model into two segments situated separately on the server and client ends, preventing either party from directly accessing the complete model structure and strengthening its resilience against attacks. However, existing studies have demonstrated that even with access restricted to partial model outputs, split learning remains susceptible to data reconstruction attacks.
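As a concrete illustration of the partitioning the abstract describes, the following is a minimal sketch of one split-learning training step: the client runs the first model segment and ships only the intermediate ("smashed") activations, the server finishes the forward pass and returns the gradient of the loss with respect to those activations, and the client backpropagates that gradient through its own segment. The split point, layer sizes, and optimizers are arbitrary illustrative assumptions, not a specific system's protocol.

```python
import torch
import torch.nn as nn

client_net = nn.Sequential(            # client-side segment (first layers)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
server_net = nn.Sequential(            # server-side segment (remaining layers)
    nn.Flatten(), nn.Linear(16 * 16 * 16, 10))

opt_c = torch.optim.SGD(client_net.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.01)

def train_step(x, y):
    # Client computes its segment and ships only the "smashed" activations.
    smashed = client_net(x)
    sent = smashed.detach().requires_grad_(True)   # what the server receives

    # Server finishes the forward pass, computes the loss, updates its weights,
    # and returns the gradient of the loss w.r.t. the received activations.
    loss = nn.functional.cross_entropy(server_net(sent), y)
    opt_s.zero_grad(); loss.backward(); opt_s.step()

    # Client backpropagates the returned gradient through its own segment.
    opt_c.zero_grad(); smashed.backward(sent.grad); opt_c.step()
    return loss.item()

# Example usage with random 32x32 RGB inputs:
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(x, y))
```

Note that neither party ever sees the other's model segment; only the smashed activations and their gradients cross the boundary, which is exactly the interface that data reconstruction attacks attempt to exploit.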
Sci Rep
January 2025
Computer Science Department, Faculty of Computers and Information, South Valley University, Qena, 83523, Egypt.
Adversarial attacks have been widely studied in computer vision (CV), but their effect on network security applications remains an open area of investigation. As IoT, AI, and 5G continue to converge to realize the potential of Industry 4.0, security events and incidents on IoT systems have increased.
Sci Rep
January 2025
Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, Jharkhand, 835215, India.
This research introduces a novel hybrid cryptographic framework that combines traditional cryptographic protocols with advanced methodologies, specifically Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) and Genetic Algorithms (GA). We evaluated several cryptographic protocols, including AES-ECB, AES-GCM, ChaCha20, RSA, and ECC, against critical metrics such as security level, efficiency, side-channel resistance, and cryptanalysis resistance. Our findings demonstrate that this integrated approach significantly enhances both security and efficiency across all evaluated protocols.
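As a small illustration of the kind of efficiency comparison mentioned above, the sketch below measures encryption throughput for two of the listed protocols (AES-256-GCM and ChaCha20-Poly1305) using the Python `cryptography` package. The payload size and iteration count are arbitrary assumptions; this is not the paper's evaluation harness.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

payload = os.urandom(1 << 20)   # 1 MiB of random data
iters = 100

def bench(cipher):
    """Return encryption throughput in MB/s for an AEAD cipher object."""
    start = time.perf_counter()
    for _ in range(iters):
        nonce = os.urandom(12)          # fresh 96-bit nonce per encryption
        cipher.encrypt(nonce, payload, None)
    elapsed = time.perf_counter() - start
    return iters * len(payload) / elapsed / 1e6

aes = AESGCM(AESGCM.generate_key(bit_length=256))
chacha = ChaCha20Poly1305(ChaCha20Poly1305.generate_key())
print(f"AES-256-GCM:       {bench(aes):8.1f} MB/s")
print(f"ChaCha20-Poly1305: {bench(chacha):8.1f} MB/s")
```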