Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting

Over the past few years, convolutional neural networks (CNNs) have been shown to reach superhuman performance in visual recognition tasks. However, CNNs can easily be fooled by adversarial examples (AEs), i.e., maliciously crafted images that force the networks to predict an incorrect output while being extremely similar to those for which a correct output is predicted. Regular AEs are not robust to input image transformations, which can therefore be used to detect whether an AE is presented to the network. Nevertheless, it is still possible to generate AEs that are robust to such transformations. This article extensively explores the detection of AEs via image transformations and proposes a novel methodology, called defense perturbation, to detect robust AEs using the same input transformations the AEs are robust to. Such a defense perturbation is shown to be an effective countermeasure to robust AEs. Furthermore, multinetwork AEs are introduced. These AEs can simultaneously fool multiple networks, which is critical in systems that use network redundancy, such as those based on architectures with majority voting over multiple CNNs. Finally, an extensive set of experiments based on state-of-the-art CNNs trained on the ImageNet dataset is reported.
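The two defense ideas summarized in the abstract — flagging an input whose prediction changes under a transformation, and aggregating redundant networks by majority vote — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation; `model`, `transform`, and the toy classifiers in the usage note are hypothetical stand-ins for real CNNs and image transformations.

```python
from collections import Counter

def detect_by_transformation(model, transform, x):
    # Regular AEs are not robust to input transformations: if the
    # model's prediction changes after transforming the input,
    # flag the input as a suspected adversarial example.
    return model(x) != model(transform(x))

def majority_vote(models, x):
    # Network redundancy: predict by majority vote over several
    # classifiers, so an AE must fool most of them to succeed.
    preds = [m(x) for m in models]
    return Counter(preds).most_common(1)[0][0]
```

As a toy usage example, a stand-in "classifier" that thresholds the sum of its input, paired with rounding as the transformation, flags a borderline input like `[0.4, 0.2]` (whose label flips after rounding) but not a clearly classified one like `[2.0, 3.0]`.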


Source: http://dx.doi.org/10.1109/TNNLS.2021.3105238

Publication Analysis

Top Keywords (frequency)

aes robust: 12
aes: 9
adversarial examples: 8
input transformations: 8
image transformations: 8
defense perturbation: 8
robust aes: 8
transformations: 5
robust: 5
detecting adversarial: 4
