Facial pose variation is one of the major factors making face recognition (FR) a challenging task. One popular solution is to convert non-frontal faces to frontal ones on which FR is performed. Rotating faces causes facial pixel value changes. Therefore, existing CNN-based methods learn to synthesize frontal faces in color space. However, this learning problem in a color space is highly non-linear, causing the synthetic frontal faces to lose fine facial textures. In this paper, we take the view that the non-frontal-to-frontal pixel changes are essentially caused by geometric transformations (rotation, translation, and so on) in space. Therefore, we aim to learn the non-frontal-to-frontal facial conversion in the spatial domain rather than the color domain to ease the learning task. To this end, we propose an appearance-flow-based face frontalization convolutional neural network (A3F-CNN). Specifically, A3F-CNN learns to establish a dense correspondence between the non-frontal and frontal faces. Once the correspondence is built, frontal faces are synthesized by explicitly "moving" pixels from the non-frontal face. In this way, the synthetic frontal faces can preserve fine facial textures. To improve the convergence of training, an appearance-flow-guided learning strategy is proposed. In addition, a generative adversarial network loss is applied to achieve a more photorealistic face, and a face mirroring method is introduced to handle the self-occlusion problem. Extensive experiments are conducted on face synthesis and pose-invariant FR. Results show that our method can synthesize more photorealistic faces than existing methods in both controlled and uncontrolled lighting environments. Moreover, we achieve very competitive FR performance on the Multi-PIE, LFW and IJB-A databases.
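The "moving pixels" step described above amounts to warping the non-frontal image with a dense flow field: each output (frontal) pixel samples a location in the input image given by the correspondence. A minimal NumPy sketch of that bilinear-sampling step (the function name and the absolute-coordinate flow convention are illustrative assumptions; in A3F-CNN the flow field is predicted by the network and the sampling is implemented differentiably):

```python
import numpy as np

def warp_by_appearance_flow(image, flow):
    """Synthesize an output image by bilinearly sampling `image` at the
    coordinates given by `flow` (appearance-flow warping).

    image: (H, W, C) float array, the non-frontal face.
    flow:  (H, W, 2) float array; flow[y, x] = (src_x, src_y), the absolute
           coordinate in `image` to sample for output pixel (y, x).
    """
    H, W, _ = image.shape
    # Clamp sampling coordinates to the image bounds.
    sx = np.clip(flow[..., 0], 0, W - 1)
    sy = np.clip(flow[..., 1], 0, H - 1)
    # Integer corners surrounding each sampling point.
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    # Fractional offsets, broadcast over the channel axis.
    wx = (sx - x0)[..., None]
    wy = (sy - y0)[..., None]
    # Bilinear interpolation of the four corner pixels.
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because every output pixel is copied (with interpolation) from real input pixels rather than regressed in color space, fine textures survive the transformation; an identity flow field reproduces the input exactly.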
DOI: http://dx.doi.org/10.1109/TIP.2018.2883554