Highly realistic image and video synthesis has become possible, and relatively simple, with the rapid growth of generative adversarial networks (GANs). GAN-based applications, such as DeepFake image and video manipulation and adversarial attacks, have been used to disrupt and confound the truth in images and videos on social media. DeepFake technology aims to synthesize image content of high visual quality that can mislead the human visual system, while adversarial perturbations attempt to mislead deep neural networks into wrong predictions. Defense becomes difficult when adversarial perturbation and DeepFake are combined. This study examines a novel deceptive mechanism based on statistical hypothesis testing against DeepFake manipulation and adversarial attacks. First, a deceptive model built on two isolated sub-networks is designed to generate two-dimensional random variables with a specific distribution for detecting DeepFake images and video, and a maximum-likelihood loss is proposed for training it. A novel hypothesis-testing scheme is then proposed for detecting DeepFake video and images with the well-trained deceptive model. Comprehensive experiments demonstrate that the proposed decoy mechanism generalizes to compressed and unseen manipulation methods for both DeepFake and attack detection.
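The detection idea in the abstract can be illustrated with a minimal sketch. The assumptions here are ours, not the paper's: we suppose the two trained sub-networks jointly emit one 2-D output per frame, that these outputs follow a standard normal N(0, I) on pristine inputs, and that the hypothesis test thresholds the clip's average negative log-likelihood under that target distribution. The threshold value and the Gaussian target are illustrative placeholders.

```python
import numpy as np

def nll_gaussian(z):
    """Average negative log-likelihood of 2-D points under N(0, I),
    the distribution the decoy sub-networks are assumed (for this
    sketch) to emit on real, unmanipulated inputs."""
    return float(np.mean(0.5 * np.sum(z**2, axis=1) + np.log(2 * np.pi)))

def is_fake(z, threshold=4.0):
    """Hypothesis test: reject H0 ('input is real') when the clip's
    per-frame decoy outputs are unlikely under the target distribution.
    The threshold is a hypothetical value chosen for illustration."""
    return nll_gaussian(z) > threshold

rng = np.random.default_rng(0)
real = rng.standard_normal((200, 2))        # decoy outputs on a real clip
fake = rng.standard_normal((200, 2)) + 3.0  # shifted outputs: manipulation
print(is_fake(real), is_fake(fake))         # → False True
```

For a real clip the expected average NLL is about 1 + log(2π) ≈ 2.84, well under the threshold, while any manipulation that shifts the decoy outputs drives the NLL up sharply, which is the statistical separation the hypothesis test exploits.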
DOI: http://dx.doi.org/10.1109/TPAMI.2023.3253390