Clustering Approach for Detecting Multiple Types of Adversarial Examples.

Sensors (Basel)

School of Computer Science and Engineering, Pusan National University, Busan 46241, Korea.

Published: May 2022

AI Article Synopsis

  • Researchers explore how adversarial examples can trick deep learning models by intentionally altering input features, prompting the need for effective defense methods (a minimal attack sketch follows this list).
  • Current defense strategies are mainly divided into three types: model retraining, input transformation, and adversarial example detection, with the latter being the focus due to its ability to avoid misclassifying legitimate data.
  • The paper proposes an improved adversarial example detection method that not only identifies legitimate and adversarial inputs but also categorizes various types of adversarial examples, showing better accuracy compared to existing methods.
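
Adversarial examples are typically produced by gradient-based attacks. As a minimal sketch of one standard attack, the Fast Gradient Sign Method (FGSM) of Goodfellow et al., the snippet below perturbs each input feature by a small step eps in the direction that increases the model's loss. The dummy model, input shapes, and eps value are illustrative assumptions, and the paper considers multiple attack types beyond this one.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        """Fast Gradient Sign Method: move each input feature one small
        signed-gradient step in the direction that raises the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Clip back to [0, 1] so the result remains a valid input.
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    # Illustrative usage with a dummy classifier and random "images".
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)          # batch of inputs in [0, 1]
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y)      # perturbed copies of x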

Article Abstract

By intentionally perturbing the input features fed to a deep learning model, an adversary generates an adversarial example that deceives the model. Because adversarial examples have recently been recognized as one of the most severe problems in deep learning technology, defense methods against them have been actively studied. Effective defenses fall into one of three architectures: (1) model retraining; (2) input transformation; and (3) adversarial example detection. Detection-based defenses have drawn particular attention because, unlike the other two, they do not make wrong decisions on legitimate input data. In this paper, we note that current detection-based defenses classify the input data only as either legitimate or adversarial: they can detect adversarial examples, but they cannot separate the input into multiple classes, i.e., legitimate data and the various types of adversarial examples. To classify the input data into multiple classes while increasing the accuracy of the clustering model, we propose an advanced detection-based defense that extracts key features from the input data and feeds them into a clustering model. Experimental results on various application datasets show that the proposed method detects adversarial examples while also classifying their types, and that its accuracy outperforms that of recent detection-based defenses.
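
The abstract specifies only a two-stage pipeline: extract key features from the input, then feed them to a clustering model that separates legitimate data from the various attack types. The sketch below is a minimal illustration of that structure, not the authors' implementation: it assumes the features (e.g., a network's penultimate-layer activations) are already computed, uses k-means as the clustering model, and names each cluster by majority vote over a labeled calibration set. The function names, the one-cluster-per-class choice, and scikit-learn itself are all assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_cluster_detector(calib_features, calib_labels, n_attack_types, seed=0):
        """Fit k-means with one cluster per class (legitimate = 0 plus
        n_attack_types attack classes), then name each cluster after the
        majority calibration label that falls into it."""
        km = KMeans(n_clusters=n_attack_types + 1, n_init=10, random_state=seed)
        assignments = km.fit_predict(calib_features)
        cluster_to_class = {}
        for c in range(km.n_clusters):
            members = calib_labels[assignments == c]
            # An empty cluster defaults to "legitimate" (class 0).
            cluster_to_class[c] = int(np.bincount(members).argmax()) if members.size else 0
        return km, cluster_to_class

    def classify_inputs(km, cluster_to_class, features):
        """Return 0 for inputs judged legitimate, or 1..K for the
        predicted adversarial-example type."""
        return np.array([cluster_to_class[c] for c in km.predict(features)])

A real system might allocate several clusters per class or swap in a different clustering algorithm; the point here is only the detect-and-type structure the abstract describes.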

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9146128
DOI: http://dx.doi.org/10.3390/s22103826

Publication Analysis

Top Keywords

adversarial example: 36
defense methods: 28
example detection: 28
detection architecture: 28
adversarial examples: 24
methods adversarial: 24
input data: 24
adversarial: 16
types adversarial: 12
deep learning: 12
