Generating adversarial examples without specifying a target model.

PeerJ Comput Sci

School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China.

Published: September 2021

Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model while they work. In practice, an attacker who issues too many queries is easily detected, and this problem is especially acute in the black-box setting. To solve it, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model when generating adversarial examples, so it does not need to query the target. Experimental results show a maximum attack success rate of 81.78% on the MNIST data set and 87.99% on the CIFAR-10 data set. In addition, as a GAN-based method, it has a low time cost.
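The abstract gives no architectural details of AWTM, so the following is only a minimal numpy sketch of the final step that any generator-based attack of this kind shares: combining a generated perturbation with a clean input under an L-infinity budget, without ever querying a target model. The function name and the `epsilon` budget are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def apply_generated_perturbation(x, perturbation, epsilon=0.1):
    """Clip a generator-produced perturbation to an L-infinity budget
    and add it to the clean input, keeping pixels in [0, 1].

    `perturbation` stands in for the output of a trained GAN generator;
    no target model is consulted at any point, which is the property
    the abstract emphasizes.
    """
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(x + delta, 0.0, 1.0)
```

Because crafting is a single forward pass through a generator plus this clipping step, the per-example cost stays low, consistent with the abstract's claim about GAN-based methods.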


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459786
DOI: http://dx.doi.org/10.7717/peerj-cs.702

Publication Analysis

Top Keywords

adversarial examples (12)
target model (12)
generating adversarial (8)
data set (8)
target (5)
examples target (4)
model adversarial (4)
examples regarded (4)
regarded security (4)
security threat (4)

Similar Publications

Confronting adversarial attacks and data imbalances, attaining adversarial robustness under long-tailed distribution presents a challenging problem. Adversarial training (AT) is a conventional solution for enhancing adversarial robustness, which generates adversarial examples (AEs) in a generation phase and subsequently trains on these AEs in a training phase. Existing long-tailed adversarial learning methods follow the AT framework and rebalance the AE classification in the training phase.
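The two-phase AT framework described above can be sketched in a few lines of numpy. The logistic model, the FGSM generation step, and all function names here are illustrative assumptions for the sketch, not the method of the cited paper.

```python
import numpy as np

def fgsm_example(x, y, w, b, epsilon=0.1):
    """Generation phase: one FGSM step against a logistic model.

    For p = sigmoid(w.x + b) with cross-entropy loss, the gradient
    of the loss w.r.t. the input is (p - y) * w, so we step along
    its sign and keep pixels in [0, 1].
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return np.clip(x + epsilon * np.sign((p - y) * w), 0.0, 1.0)

def adversarial_training_step(X, Y, w, b, epsilon=0.1, lr=0.5):
    """Training phase: one gradient step on the adversarial batch."""
    X_adv = np.array([fgsm_example(x, y, w, b, epsilon)
                      for x, y in zip(X, Y)])
    P = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
    grad_w = (P - Y) @ X_adv / len(Y)
    grad_b = float(np.mean(P - Y))
    return w - lr * grad_w, b - lr * grad_b
```

Long-tailed variants keep this same loop but reweight or rebalance the loss over the adversarial batch so that tail classes are not drowned out.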


Transferable adversarial examples, which are generated by transfer-based attacks, have strong adaptability for attacking a completely unfamiliar victim model without knowing its architecture, parameters, or outputs. While current transfer-based attacks easily defeat the surrogate model with minor perturbations, they struggle to transfer these perturbations to unfamiliar victim models. To characterize these untransferable adversarial examples, which consist of natural examples and perturbations, we define the concept of a fuzzy domain.


Generative Artificial Intelligence (AI) in Pathology and Medicine: A Deeper Dive.

Mod Pathol

December 2024

Department of Pathology, University of Pittsburgh Medical Center, PA, USA; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, PA, USA. Electronic address:

This review article builds upon the introductory piece in our seven-part series, delving deeper into the transformative potential of generative artificial intelligence (Gen AI) in pathology and medicine. The article explores the applications of Gen AI models in pathology and medicine, including the use of custom chatbots for diagnostic report generation, synthetic image synthesis for training new models, dataset augmentation, hypothetical scenario generation for educational purposes, and the use of multimodal along with multi-agent models. This article also provides an overview of the common categories within generative AI models, discussing open-source and closed-source models, as well as specific examples of popular models such as GPT-4, Llama, Mistral, DALL-E, Stable Diffusion and their associated frameworks (e.


Machine learning is central to mainstream technology and outperforms classical approaches built on handcrafted feature design. Aside from its learned feature extraction, it follows an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness have drawn attention, since malicious and imperceptible perturbations, whether crafted by humans or machines, can entirely change a model's predictions.


Although much is known about why people engage in collective action participation (e.g., politicized identity, group-based anger), little is known about the psychological consequences of such participation.

