Deep neural networks are fragile under adversarial attacks. In this work, we propose a new defense method based on image restoration to remove adversarial attack noise. Using the gradient information back-propagated through the network to the input image, we identify high-sensitivity keypoints, which contribute significantly to the image classification performance. We then partition the image pixels into two groups: high-sensitivity and low-sensitivity points. For low-sensitivity pixels, we use a total variation (TV) norm-based image smoothing method to remove adversarial attack noise. For the high-sensitivity keypoints, we develop a structure-preserving low-rank image completion method. Based on matrix analysis and optimization, we derive an iterative solution for this optimization problem. Our extensive experimental results on the CIFAR-10, SVHN, and Tiny-ImageNet datasets demonstrate that our method significantly outperforms other defense methods based on image de-noising or restoration, especially under powerful adversarial attacks.
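The pixel partition and TV smoothing stages described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the sensitivity map has already been obtained by back-propagating the classification loss to the input image, partitions pixels by gradient magnitude (the `quantile` threshold is an assumed parameter), and applies a simple gradient-descent isotropic TV smoothing to the low-sensitivity pixels only. The structure-preserving low-rank completion step for high-sensitivity keypoints is omitted.

```python
import numpy as np

def partition_pixels(sensitivity, quantile=0.9):
    """Return a boolean mask of high-sensitivity pixels.

    `sensitivity` is assumed to be the gradient of the classification
    loss w.r.t. the input image; the top (1 - quantile) fraction of
    pixels by absolute gradient are marked as high-sensitivity.
    """
    thresh = np.quantile(np.abs(sensitivity), quantile)
    return np.abs(sensitivity) >= thresh

def tv_denoise(img, mask_low, lam=0.1, step=0.2, iters=50):
    """Gradient descent on 0.5*||u - img||^2 + lam*TV(u).

    Updates are applied only where `mask_low` is True, so
    high-sensitivity pixels are left untouched for a separate
    restoration step (low-rank completion in the paper).
    """
    u = img.astype(float).copy()
    for _ in range(iters):
        # Forward differences (periodic boundary for simplicity).
        dx = np.roll(u, -1, axis=1) - u
        dy = np.roll(u, -1, axis=0) - u
        grad_mag = np.sqrt(dx**2 + dy**2 + 1e-8)
        # Divergence of the normalized gradient approximates the
        # (sub)gradient of the isotropic TV norm.
        px, py = dx / grad_mag, dy / grad_mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        update = (img - u) + lam * div
        u = u + step * update * mask_low
    return u
```

In practice the sensitivity map would come from a single backward pass of the classifier, and the smoothing would run per color channel; here both are abstracted away to keep the sketch self-contained.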
DOI: http://dx.doi.org/10.1109/TIP.2021.3086596
J Imaging
January 2025
Science and Research Department, Moscow Technical University of Communications and Informatics, 111024 Moscow, Russia.
Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance, and robotics. Adversarial patch attacks, which are easy to implement in the real world, effectively counteract object detection by state-of-the-art neural-based detectors. This poses a serious danger in many fields of activity.
J Imaging
January 2025
Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.
Neural Netw
January 2025
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China.
Due to its distinctive distributed privacy-preserving architecture, split learning has found widespread application in scenarios where client-side computational resources are limited. Unlike federated learning, where clients retain the whole model, split learning partitions the model into two segments situated separately on the server and client ends, preventing either party from directly accessing the complete model structure and fortifying its resilience against attacks. However, existing studies have demonstrated that even with access restricted to partial model outputs, split learning remains susceptible to data reconstruction attacks.
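The client/server model split described above can be illustrated with a toy forward pass. Class names, layer dimensions, and weights are hypothetical; training and the backward exchange of gradients at the cut layer are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientSegment:
    """First layers stay on the client; only activations leave the device."""
    def __init__(self, d_in, d_hidden):
        self.W = rng.normal(0, 0.1, (d_in, d_hidden))

    def forward(self, x):
        # ReLU activations at the cut layer ("smashed data") are all
        # the server ever receives -- never the raw inputs.
        return np.maximum(x @ self.W, 0.0)

class ServerSegment:
    """Remaining layers live on the server; it never sees raw data."""
    def __init__(self, d_hidden, d_out):
        self.W = rng.normal(0, 0.1, (d_hidden, d_out))

    def forward(self, h):
        return h @ self.W

client = ClientSegment(8, 16)
server = ServerSegment(16, 3)
x = rng.normal(size=(4, 8))                       # raw data on the client
logits = server.forward(client.forward(x))        # only activations cross
```

The data reconstruction attacks mentioned above target exactly the cut-layer activations that `client.forward` sends over the network.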
Sci Rep
January 2025
Computer Science Department, Faculty of Computers and Information, South Valley University, Qena, 83523, Egypt.
Adversarial attacks have been widely studied in computer vision (CV), but their effect on network security applications remains an open area of investigation. As IoT, AI, and 5G continue to converge and realize the potential of Industry 4.0, security events and incidents on IoT systems have increased.
Sci Rep
January 2025
Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, Jharkhand, 835215, India.
This research introduces a novel hybrid cryptographic framework that combines traditional cryptographic protocols with advanced methodologies, specifically Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) and Genetic Algorithms (GA). We evaluated several cryptographic protocols, including AES-ECB, AES-GCM, ChaCha20, RSA, and ECC, against critical metrics such as security level, efficiency, side-channel resistance, and cryptanalysis resistance. Our findings demonstrate that this integrated approach significantly enhances both security and efficiency across all evaluated protocols.