Two-stage deep learning network-based few-view image reconstruction for parallel-beam projection tomography.

Quant Imaging Med Surg

Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China.

Published: April 2022

Background: Projection tomography (PT) is a valuable method for fast volumetric imaging with isotropic spatial resolution. PT based on sparse-view or limited-angle reconstruction can greatly reduce data acquisition time, lower radiation dose, and simplify sample fixation. However, few techniques can currently reconstruct images from few-view projection data, which is especially important for PT in living organisms.

Methods: A two-stage deep learning network (TSDLN)-based framework was proposed for parallel-beam PT reconstruction from few-view projections. The framework is composed of a reconstruction network (R-net) and a correction network (C-net). The R-net is a generative adversarial network (GAN) that completes the image information in a direct back-projection (BP) of the sparse signal, bringing the reconstructed image close to the result obtained from fully sampled projection data. The C-net is a U-net array that denoises the R-net output to yield a high-quality reconstructed image. A minimal sketch of this two-stage design is given after this paragraph.
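To make the two-stage design concrete, here is a minimal PyTorch sketch of the pipeline: an R-net generator refining the direct BP image, followed by a small U-net-style C-net. The layer counts, channel widths, and residual connection are illustrative assumptions, not the paper's architecture, and only the generator half of the GAN is shown.

```python
import torch
import torch.nn as nn

class RNet(nn.Module):
    """Reconstruction stage (sketch): maps a direct back-projection (BP)
    of the sparse sinogram toward a full-view reconstruction. The paper
    uses a GAN; only a plausible generator forward pass is shown."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, bp_image):
        # Residual connection (an assumption): refine the BP image
        # rather than regenerate it from scratch.
        return bp_image + self.body(bp_image)

class CNet(nn.Module):
    """Correction stage (sketch): a one-level U-net-style encoder/decoder
    with a skip connection that denoises the R-net output."""
    def __init__(self, channels=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(channels * 2, channels, 2, stride=2)
        self.dec = nn.Conv2d(channels * 2, 1, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)            # features kept for the skip connection
        d = self.up(self.down(e))  # encode at half resolution, decode back
        return self.dec(torch.cat([d, e], dim=1))

class TSDLN(nn.Module):
    """Two-stage pipeline: R-net completes the image, C-net cleans it up."""
    def __init__(self):
        super().__init__()
        self.r_net = RNet()
        self.c_net = CNet()

    def forward(self, bp_image):
        return self.c_net(self.r_net(bp_image))

# Example: a 128x128 back-projection from few-view data.
model = TSDLN()
bp = torch.randn(1, 1, 128, 128)
print(model(bp).shape)  # torch.Size([1, 1, 128, 128])
```

Splitting completion (R-net) and denoising (C-net) lets each stage be trained and evaluated separately; the residual form in R-net reflects the idea that the BP image already carries most of the low-frequency content.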

Results: The accuracy and feasibility of the proposed TSDLN-based framework for few-view reconstruction were first evaluated in simulations using images from the public DeepLesion dataset. The framework outperformed traditional analytic and iterative reconstruction algorithms, especially with sparse-view projection images. For example, with as few as two projections, the TSDLN-based framework reconstructed high-quality images close to the originals, with structural similarity (SSIM) greater than 0.8. By feeding previously acquired optical PT (OPT) data into a TSDLN-based framework trained on computed tomography (CT) data, we further demonstrated the framework's transfer capability: even when the number of projections was reduced to 5, the contours and distribution of the samples could still be seen in the reconstructed images.
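As a hedged illustration of this evaluation setup, the scikit-image sketch below simulates parallel-beam projection at a handful of angles, forms the direct back-projection a network like the TSDLN would take as input, and scores a filtered back-projection (FBP) baseline with SSIM. The Shepp-Logan phantom stands in for a DeepLesion slice, and the particular view counts are assumptions for demonstration.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from skimage.metrics import structural_similarity as ssim

# Ground-truth image (a stand-in for a DeepLesion slice).
image = resize(shepp_logan_phantom(), (128, 128))

for n_views in (2, 5, 18, 180):
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=angles)  # parallel-beam projections

    # Direct (unfiltered) back-projection: the kind of degraded input
    # the R-net is described as completing.
    bp = iradon(sinogram, theta=angles, filter_name=None)

    # Filtered back-projection: the traditional analytic baseline.
    fbp = iradon(sinogram, theta=angles)

    score = ssim(image, fbp, data_range=image.max() - image.min())
    print(f"{n_views:4d} views: FBP SSIM = {score:.3f}")
```

Running this shows how quickly the analytic baseline degrades as the view count drops, which is the regime where the paper reports the learned framework retaining SSIM above 0.8.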

Conclusions: The simulation and experimental results showed that the TSDLN-based framework has strong reconstruction ability from few-view projection images and great potential for application in PT.

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8923870
DOI: http://dx.doi.org/10.21037/qims-21-778

Publication Analysis

Top Keywords

tsdln-based framework: 24
deep learning: 8
reconstruction: 8
image reconstruction: 8
projection tomography: 8
few-view projection: 8
framework: 8
projection images: 8
tsdln-based: 6
few-view: 5
