Magnetic resonance imaging (MRI) has become one of the most standardized and widely used neuroimaging protocols for the detection and diagnosis of neurodegenerative diseases. In clinical practice, multi-modality MR images provide more comprehensive information than single-modality images. However, high-quality multi-modality MR images can be difficult to obtain in the actual diagnostic process due to various uncertainties, so efficient methods for modality completion and synthesis have attracted increasing attention in the research community. In this article, style transfer is introduced into the conditional generative adversarial network (cGAN) architecture, and a cGAN model with hierarchical feature mapping and fusion (ST-cGAN) is proposed to address cross-modality synthesis of MR images. Rather than focusing solely on pixel-wise similarity, as most cGAN-based methods do, the proposed ST-cGAN exploits style information and applies it to the synthetic image's content structure. Taking images of two modalities as conditional input, ST-cGAN extracts style features at different levels and integrates them with the content features to form a style-enhanced synthetic image. The model is further made robust to random noise by adding a noise input to the generator. A comprehensive analysis compares the proposed ST-cGAN with state-of-the-art baselines on four representative evaluation metrics, and experimental results on the IXI (Information eXtraction from Images) dataset verify the validity of ST-cGAN from different evaluation perspectives.
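The abstract does not specify how style features are integrated with content features. A common mechanism in style-transfer networks is adaptive instance normalization (AdaIN), which re-styles content features with the channel-wise statistics of style features; the sketch below is a minimal illustration of that idea under this assumption, not the paper's actual code. The `adain` helper and the toy T1/T2 feature maps are hypothetical.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization (illustrative):
    normalize content features per channel, then rescale and
    shift them with the style features' channel statistics.
    Both inputs have shape (C, H, W)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mean) / c_std
    return s_std * normalized + s_mean

# Toy feature maps standing in for encoder outputs of two MR modalities
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(8, 16, 16))  # e.g. content-branch features
style = rng.normal(2.0, 0.5, size=(8, 16, 16))    # e.g. style-branch features

fused = adain(content, style)
# Per channel, the fused features now carry the style statistics
print(np.allclose(fused.mean(axis=(1, 2)), style.mean(axis=(1, 2)), atol=1e-3))
```

In a hierarchical design like the one the abstract describes, such a fusion step would typically be applied at several encoder/decoder levels rather than once.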
DOI: http://dx.doi.org/10.1016/j.compbiomed.2022.105928