A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

Ultramicroscopy

eMERG, Fisika Aplikatua I Saila, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno "Pitxitxi" Pasealekua 2, 48013 Bilbao, Spain.

Published: February 2017

The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used.
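As an illustration of how such an edge-based criterion could be applied in practice, the sketch below scores the sharpness of line profiles sampled across object boundaries for reconstructions obtained at different iteration numbers. This is a minimal Python sketch, not the paper's exact statistical analysis; the reconstructions are assumed to have been exported (e.g. from TOMOJ or TOMO3D) and loaded as NumPy volumes, and the sharpness metric and profile-sampling scheme are illustrative placeholders.

```python
import numpy as np

def edge_sharpness(profile):
    """Score a 1D intensity profile sampled across an object edge.

    Sharpness is taken here as the maximum absolute gradient of the
    profile normalised by the intensity step across the edge; this is a
    generic stand-in for the statistical edge-profile analysis described
    in the abstract, not its exact criterion.
    """
    profile = np.asarray(profile, dtype=float)
    step = profile.max() - profile.min()
    if step == 0:
        return 0.0
    return float(np.abs(np.gradient(profile)).max() / step)

def score_iteration_numbers(reconstructions, edge_lines):
    """Average the edge-sharpness score over a set of sampled profiles
    for each candidate reconstruction (one volume per iteration number).

    reconstructions : dict mapping iteration number -> 3D NumPy volume
    edge_lines      : list of (z, y, x_start, x_stop) tuples selecting 1D
                      profiles that cross an object/background boundary
                      (hypothetical sampling scheme, for illustration).
    """
    scores = {}
    for n_iter, vol in reconstructions.items():
        profiles = [vol[z, y, x0:x1] for (z, y, x0, x1) in edge_lines]
        scores[n_iter] = float(np.mean([edge_sharpness(p) for p in profiles]))
    return scores

# Example usage (volumes exported at, e.g., 10, 20, ... SIRT iterations):
# scores = score_iteration_numbers(recs, edge_lines)
# best_n = max(scores, key=scores.get)
```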


Source
http://dx.doi.org/10.1016/j.ultramic.2016.10.013

Publication Analysis

Top Keywords

iteration number (8)
number sirt (8)
electron tomography (8)
methodology (4)
methodology finding (4)
finding optimal (4)
optimal iteration (4)
sirt (4)
sirt algorithm (4)
algorithm quantitative (4)

Similar Publications

Humanitarian medical response to natural and human-made disasters can be complicated by high clinician, staff, and patient turnover. While electronic medical records are being scaled up globally, their use remains limited in humanitarian response settings. The Fast Electronic Medical Record (fEMR) system is an open-source electronic health record system specifically designed for use in resource-limited settings and humanitarian crises.


In Self-Consistent Field (SCF) calculations, the choice of initial guess plays a key role in determining the time-to-solution by influencing the number of iterations required for convergence. However, focusing solely on reducing iterations may overlook the computational cost associated with improving the accuracy of initial guesses. This study critically evaluates the effectiveness of two initial guess methods, basis set projection (BSP) and many-body expansion (MBE), on Hartree-Fock and hybrid Density Functional Theory (B3LYP and MN15) methods.
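As a back-of-the-envelope illustration of the trade-off raised here (a generic cost model with made-up numbers, not figures from the study), a more accurate initial guess only pays off when its construction cost is small compared with the iteration time it removes:

```python
def total_scf_time(guess_time, n_iterations, time_per_iteration):
    """Crude cost model: time to build the initial guess plus the SCF loop."""
    return guess_time + n_iterations * time_per_iteration

# Hypothetical timings in seconds, for illustration only.
cheap_guess  = total_scf_time(guess_time=1.0,  n_iterations=18, time_per_iteration=5.0)   # 91.0
costly_guess = total_scf_time(guess_time=25.0, n_iterations=12, time_per_iteration=5.0)   # 85.0
print(cheap_guess, costly_guess)  # the "better" guess wins only by a small margin here
```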


Background: With the widespread use of lumbar pedicle screws for internal fixation, the morphology of the screws and the surrounding tissues should be evaluated. The metal artifact reduction (MAR) technique can reduce the artifacts caused by pedicle screws, improve the quality of computed tomography (CT) images after pedicle fixation, and provide more imaging information to the clinic.

Purpose: To explore whether the MAR+ method, a projection-based algorithm for correcting metal artifacts through multiple iterative operations, can reduce metal artifacts and whether it has an impact on the structures surrounding the metal.
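MAR+ itself is a vendor-specific, projection-based iterative algorithm whose details are not given in this excerpt; purely to illustrate the general idea of correcting metal artifacts in the projection domain, the sketch below performs a classic single-pass linear interpolation of metal-affected sinogram bins (a simplification, not the MAR+ method):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-affected detector bins in each projection row by
    linear interpolation from the neighbouring unaffected bins.

    sinogram   : 2D array (n_angles, n_detectors)
    metal_mask : boolean array of the same shape, True where the ray
                 passes through metal (e.g. obtained by forward-projecting
                 a thresholded metal segmentation; assumed given here).
    """
    corrected = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            corrected[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return corrected
```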


For the application scenario of multi-user, high-bandwidth laser communication in satellite internet, this paper proposes a spatiotemporal vector optimization algorithm to achieve high energy utilization in arbitrary multi-beam generation using a liquid crystal optical phased array antenna. The method iteratively optimizes phase offsets and power coefficients to achieve precise beam shaping and efficient energy distribution among multiple beams. This approach overcomes the single-link limitation of traditional laser terminals and resolves challenges such as low radiation efficiency and substantial power loss in multi-beam generation systems utilizing passive phased array antennas.
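The excerpt does not detail the spatiotemporal vector optimization itself; purely as an illustration of iteratively optimizing a phase-only aperture so that power is shared among several beams, the following is a generic weighted Gerchberg-Saxton-style sketch for a 1D array (an assumption-laden stand-in, not the authors' algorithm):

```python
import numpy as np

def multibeam_phase(n_elements, target_bins, target_weights, n_iter=100):
    """Iteratively optimise a phase-only 1D aperture so that its far field
    (modelled here simply as an FFT) concentrates power in the requested
    bins with the requested relative weights.

    Generic weighted Gerchberg-Saxton-style loop, for illustration only.
    """
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, n_elements)
    weights = np.asarray(target_weights, dtype=float)
    weights = weights / weights.sum()
    gains = np.ones_like(weights)

    for _ in range(n_iter):
        far = np.fft.fft(np.exp(1j * phase))
        amp = np.abs(far[target_bins]) + 1e-12
        # Boost the drive of beams that currently carry too little power.
        gains *= np.sqrt(weights / (amp**2 / np.sum(amp**2)))
        target_field = np.zeros_like(far)
        target_field[target_bins] = gains * np.exp(1j * np.angle(far[target_bins]))
        # Back-propagate and re-impose the phase-only constraint.
        phase = np.angle(np.fft.ifft(target_field))
    return phase

# Example: split power equally between three far-field bins.
# ph = multibeam_phase(256, target_bins=[20, 60, 100], target_weights=[1, 1, 1])
```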


Neural networks offer iterative decoding capability for low-density parity-check (LDPC) codes with superior transmission performance. However, to cope with increasing code length and rate, the complexity of the neural network increases significantly. This is due to the large amount of feature extraction required to maintain the error correction capability.

