Recent studies of deep neural networks have shown that injecting random noise into the input layer helps defend against ℓp-norm-bounded adversarial perturbations. However, such input-layer random noise may not be sufficient to defend against unrestricted adversarial examples, most of which are not ℓp-norm-bounded in the input layer. In the first part of this study, we generated a novel class of unrestricted adversarial examples, termed feature-space adversarial examples: examples that are far from the original data in the input space, adjacent to the original data in a hidden-layer feature space, and far from it again in the output layer. In the second part, we showed empirically that injecting random noise in the input layer failed to defend against these feature-space adversarial examples, whereas injecting random noise in the hidden layer defended against them. These results highlight a novel benefit of stochasticity in higher layers: it is useful for defending against feature-space adversarial examples, a class of unrestricted adversarial examples.
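As a rough illustration of the two injection points compared above, the following PyTorch sketch adds Gaussian noise either at the input or after a hidden layer at inference time. The architecture, dimensions, and noise scales are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NoisyMLP(nn.Module):
    """Toy classifier with optional noise at the input or hidden layer."""

    def __init__(self, in_dim=784, hidden_dim=256, n_classes=10,
                 input_sigma=0.0, hidden_sigma=0.1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, n_classes)
        self.input_sigma = input_sigma    # noise scale at the input layer
        self.hidden_sigma = hidden_sigma  # noise scale at the hidden layer

    def forward(self, x):
        # Input-layer noise: the kind shown to help against small
        # lp-norm-bounded perturbations.
        if self.input_sigma > 0:
            x = x + self.input_sigma * torch.randn_like(x)
        h = torch.relu(self.fc1(x))
        # Hidden-layer noise: perturbs the feature space in which
        # feature-space adversarial examples lie close to clean data.
        if self.hidden_sigma > 0:
            h = h + self.hidden_sigma * torch.randn_like(h)
        return self.fc2(h)
```

Because the noise is drawn fresh at every forward pass, test-time predictions with such stochastic layers are typically averaged over several passes.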
DOI: http://dx.doi.org/10.1016/j.neunet.2023.08.022
Proc IEEE Int Symp Biomed Imaging
May 2024
Department of Electrical and Computer Engineering, Nashville, TN, USA.
Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell level analysis in areas with over-saturated pixels.
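As a minimal sketch of the artifact in question (not from the paper), over-saturated pixels can be flagged by thresholding at the detector's clipping value, assumed 8-bit here, so that downstream single-cell quantification can exclude them:

```python
import numpy as np

def saturation_mask(channel: np.ndarray, sat_value: int = 255) -> np.ndarray:
    """Boolean mask of over-saturated pixels in one MxIF channel.

    channel: 2-D array of raw intensities; sat_value: intensity at
    which the detector clips (255 for 8-bit data, assumed here).
    """
    mask = channel >= sat_value
    # Cells whose segmentation masks overlap this region can be dropped
    # or corrected rather than quantified from clipped values.
    return mask
```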
Food Res Int
February 2025
Department of Food Science & Technology, University of California-Davis, Davis, CA 95616, USA; Department of Biological & Agricultural Engineering, University of California-Davis, Davis, CA 95616, USA. Electronic address:
Diverse species of yeasts are commonly associated with food and food production environments. Contamination of food products by spoilage yeasts poses significant challenges, leading to quality degradation and food loss. Similarly, the introduction of undesirable strains during fermentation can compromise both the quality and the progress of the fermentation process.
Sci Rep
January 2025
Computer Science Department, Faculty of Computers and Information, South Valley University, Qena, 83523, Egypt.
Adversarial attacks have been studied extensively in computer vision (CV), but their effect on network-security applications remains an open area of investigation. As IoT, AI, and 5G continue to converge toward the potential of Industry 4.0, security events and incidents on IoT systems have increased.
Data Brief
February 2025
School of Engineering and Technology, University of New South Wales, Canberra, Australia.
This dataset is generated from real-time simulations conducted in MATLAB/Simscape, focusing on the impact of smart noise signals on battery energy storage systems (BESS). Using a Deep Reinforcement Learning (DRL) agent known as Proximal Policy Optimization (PPO), noise signals in the form of subtle millivolt and milliampere variations are strategically created to represent realistic cases of False Data Injection Attacks (FDIA). These signals are designed to disrupt the State of Charge (SoC) and State of Health (SoH) estimation blocks within Unscented Kalman Filters (UKF).
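A rough sketch of what such smart-noise FDIA signals look like when applied to BESS telemetry (the PPO policy and the UKF estimators themselves are not reproduced here; all names and scales are illustrative assumptions):

```python
import numpy as np

def inject_fdia(voltage_v, current_a, v_offset_mv, i_offset_ma):
    """Apply subtle mV/mA offsets to telemetry before state estimation.

    The offsets stand in for the actions an attacker's policy would emit.
    """
    v_attacked = voltage_v + np.asarray(v_offset_mv) * 1e-3  # mV -> V
    i_attacked = current_a + np.asarray(i_offset_ma) * 1e-3  # mA -> A
    return v_attacked, i_attacked

def coulomb_count_soc(soc0, current_a, dt_s, capacity_ah):
    """Naive SoC by current integration, to show why mA-level bias matters:
    a persistent offset accumulates in the estimate over time."""
    return soc0 - np.cumsum(current_a) * dt_s / (capacity_ah * 3600.0)
```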
Sensors (Basel)
January 2025
School of Computer Science, Hubei University of Technology, Wuhan 430068, China.
Large visual language models like Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to the influence of adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspective. We propose a multi-modal fine-tuning method called Multi-modal Depth Adversarial Prompt Tuning (MDAPT), which guides the generation of visual prompts through text prompts to improve the accuracy and performance of visual language models.
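The paper's exact architecture is not reproduced here, but the core idea of text-guided visual prompts can be sketched as follows; every module name and dimension is a hypothetical stand-in (e.g., a frozen CLIP text encoder producing `text_feats`):

```python
import torch
import torch.nn as nn

class MultiModalPrompts(nn.Module):
    """Sketch of text-guided visual prompts in the spirit of MDAPT.

    Pooled text-prompt features condition small generators that produce
    prompt tokens, prepended to the vision transformer's patch embeddings
    at several depths while the backbone stays frozen.
    """

    def __init__(self, text_dim=512, vis_dim=768, n_prompts=4, n_layers=3):
        super().__init__()
        # One prompt generator per prompted transformer layer.
        self.generators = nn.ModuleList(
            nn.Linear(text_dim, n_prompts * vis_dim) for _ in range(n_layers)
        )
        self.n_prompts, self.vis_dim = n_prompts, vis_dim

    def forward(self, text_feats):
        # text_feats: (batch, text_dim) pooled text features.
        prompts = []
        for gen in self.generators:
            p = gen(text_feats).view(-1, self.n_prompts, self.vis_dim)
            prompts.append(p)  # prepend to patch tokens at that depth
        return prompts
```

Prompts generated this way can then be trained adversarially, in line with the robustness goal the abstract describes, without updating the frozen VLM weights.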