Uncompressed clinical data from modern positron emission tomography (PET) scanners are very large, exceeding 350 million data points (projection bins). The last decades have seen tremendous advancements in mathematical imaging tools, many of which lead to non-smooth (i.e. non-differentiable) optimization problems that are much harder to solve than smooth ones. Most of these tools have not been translated to clinical PET data, as the state-of-the-art algorithms for non-smooth problems do not scale well to large data. In this work, inspired by big-data machine learning applications, we use advanced randomized optimization algorithms to solve the PET reconstruction problem for a very large class of non-smooth priors, which includes, for example, total variation, total generalized variation, directional total variation and various physical constraints. The proposed algorithm randomly selects subsets of the data and only updates the variables associated with these. While this idea often leads to divergent algorithms, we show that the proposed algorithm does indeed converge for any proper subset selection. Numerically, we show on real PET data (FDG and florbetapir) from a Siemens Biograph mMR that about ten projections and backprojections suffice to solve the MAP optimization problem associated with many popular non-smooth priors, thus showing that the proposed algorithm is fast enough to bring these models into routine clinical practice.
DOI: http://dx.doi.org/10.1088/1361-6560/ab3d07
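The subset-randomized update described in the abstract follows the pattern of stochastic primal-dual methods. As a rough illustration of the mechanism only, here is a minimal SPDHG-style sketch on a toy least-squares problem with a nonnegativity constraint; the operator, step-size rule and all variable names are assumptions for illustration, not the authors' PET implementation.

```python
# Minimal sketch of a subset-randomized primal-dual (SPDHG-style) iteration on a
# toy problem: min_x 0.5*||A x - b||^2 subject to x >= 0, with the data split
# into subsets (row blocks). Illustrative assumption, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Toy "projection" operator split into n_sub row blocks (subsets of the data).
n_sub, m, n = 10, 200, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.abs(rng.standard_normal(n))
b = A @ x_true + 0.01 * rng.standard_normal(m)
blocks = np.array_split(np.arange(m), n_sub)

# Step sizes chosen so that sigma_i * tau * ||A_i||^2 <= p_i (uniform p_i = 1/n_sub).
p = 1.0 / n_sub
norms = [np.linalg.norm(A[idx], 2) for idx in blocks]
rho = 0.99
sigma = [rho / nrm for nrm in norms]
tau = rho * min(p / nrm for nrm in norms)

x = np.zeros(n)
y = [np.zeros(len(idx)) for idx in blocks]            # dual variables, one per subset
z = sum(A[idx].T @ yi for idx, yi in zip(blocks, y))  # running sum of A_i^T y_i
zbar = z.copy()

for k in range(200):
    # Primal update: prox of the nonnegativity constraint (projection onto x >= 0).
    x = np.maximum(x - tau * zbar, 0.0)

    # Pick one subset at random and update only its dual variable.
    i = rng.integers(n_sub)
    Ai, bi = A[blocks[i]], b[blocks[i]]
    # Prox of the convex conjugate of 0.5*||. - b_i||^2.
    yi_new = (y[i] + sigma[i] * (Ai @ x) - sigma[i] * bi) / (1.0 + sigma[i])
    dz = Ai.T @ (yi_new - y[i])
    y[i] = yi_new

    # Keep the running sum and an extrapolated copy for the next primal step.
    z += dz
    zbar = z + (1.0 / p) * dz

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```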
Psychometrika
March 2024
Department of Statistics, London School of Economics and Political Science, Columbia House, Room 5.16 Houghton Street, London, WC2A 2AE, UK.
Ensuring fairness in instruments such as survey questionnaires or educational tests is crucial. One way to address this is through a Differential Item Functioning (DIF) analysis, which examines whether different subgroups respond differently to a particular item after controlling for their overall latent construct level. DIF analysis is typically conducted to assess measurement invariance at the item level.
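As a rough illustration of what a DIF check can look like in practice, the sketch below runs a generic logistic-regression DIF test (matching score, group, and group-by-score terms) on simulated item responses. This is a common textbook approach, not necessarily the method developed in the article; the simulated data, the latent proxy `theta`, and all variable names are assumptions.

```python
# Hedged sketch of a logistic-regression DIF check for a single item:
# compare a model using only the matching variable against a model that adds
# group membership (uniform DIF) and a group x score interaction (non-uniform DIF).
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                  # 0 = reference group, 1 = focal group
theta = rng.standard_normal(n)                 # proxy for the latent construct level
# Simulate an item with uniform DIF: harder for the focal group at equal theta.
logit = 1.2 * theta - 0.5 - 0.6 * group
resp = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X0 = sm.add_constant(np.column_stack([theta]))                        # null model: score only
X1 = sm.add_constant(np.column_stack([theta, group, theta * group]))  # with DIF terms
m0 = sm.Logit(resp, X0).fit(disp=0)
m1 = sm.Logit(resp, X1).fit(disp=0)

lr = 2 * (m1.llf - m0.llf)                     # likelihood-ratio statistic, 2 df
print("LR =", round(lr, 2), " p =", round(chi2.sf(lr, df=2), 4))
```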
Chaos
June 2023
Department of Applied Probability and Statistics, School of Mathematics and Statistics, Northwestern Polytechnical University, Xi'an 710129, People's Republic of China.
This paper designs an algorithm to distill a piecewise non-linear dynamical system from data without prior knowledge. The system to be identified does not have to be expressible through known model terms or be thoroughly understood. We exploit the fact that an unknown piecewise non-linear system can be decomposed into a Fourier series as long as its equations of motion are Riemann integrable.
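To illustrate the underlying idea, the sketch below expands a toy piecewise non-linear right-hand side in a Fourier basis and recovers its coefficients from trajectory data by ridge-regularized least squares. The toy system, basis size and regularization are assumptions made for illustration; the paper's actual algorithm may differ in detail.

```python
# Minimal sketch: recover an unknown piecewise non-linear right-hand side f
# from trajectory data by fitting Fourier coefficients with least squares.
import numpy as np

def f_true(x):
    # Piecewise non-linear, continuous but non-smooth at x = 0 (toy example).
    return np.where(x < 0.0, -1.0 - x, x**2 - 1.0)

# Short Euler trajectories from several initial conditions; numerical derivatives.
dt = 1e-3
xs, dxs = [], []
for x0 in np.linspace(-2.0, 2.0, 40):
    x = np.empty(200)
    x[0] = x0
    for k in range(199):
        x[k + 1] = x[k] + dt * f_true(x[k])
    xs.append(x)
    dxs.append(np.gradient(x, dt))
x_all, dx_all = np.concatenate(xs), np.concatenate(dxs)

# Fourier design matrix over a period larger than the data range.
L = 2.0 * (x_all.max() - x_all.min())
freqs = 2.0 * np.pi * np.arange(1, 16) / L
def design(x):
    return np.column_stack([np.ones_like(x)] +
                           [np.sin(w * x) for w in freqs] +
                           [np.cos(w * x) for w in freqs])

# Ridge-regularized least squares for the Fourier coefficients of f.
Phi = design(x_all)
coef = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ dx_all)

xg = np.linspace(-2.0, 2.0, 400)
rms = np.sqrt(np.mean((design(xg) @ coef - f_true(xg)) ** 2))
print("RMS error of the recovered right-hand side:", float(rms))
```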
Phys Med Biol
July 2022
Institute of Mathematics and Scientific Computing, University of Graz, Austria.
Complete time of flight (TOF) sinograms of state-of-the-art TOF PET scanners have a large memory footprint. Currently, they contain ∼4 × 10⁹ data bins, which amount to ∼17 GB in 32 bit floating point precision. Moreover, their size will continue to increase with advances in the achievable detector TOF resolution and increases in the axial field of view.
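A quick back-of-the-envelope check of those figures (the bin count is taken from the abstract; everything else is simple arithmetic):

```python
# ~4e9 TOF sinogram bins stored as 32-bit floats (4 bytes each).
n_bins = 4e9             # ~4 x 10^9 TOF projection bins, from the abstract
bytes_per_bin = 4        # 32-bit floating point
print(f"~{n_bins * bytes_per_bin / 1e9:.0f} GB")   # ~16 GB, consistent with the quoted ~17 GB
```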
IEEE J Biomed Health Inform
August 2022
Organ segmentation is one of the most important steps for various medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention for reducing labeling cost. However, most existing SSL methods neglect the prior shape and position information specific to medical images, leading to unsatisfactory localization and non-smooth object boundaries.
Magn Reson Med
April 2022
Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany.
Purpose: To develop an algorithm for robust partial Fourier (PF) reconstruction applicable to diffusion-weighted (DW) images with non-smooth phase variations.
Methods: Based on an unrolled proximal splitting algorithm, a neural network architecture is derived which alternates between data consistency operations and regularization implemented by recurrent convolutions. In order to exploit correlations, multiple repetitions of the same slice are jointly reconstructed while taking permutation equivariance into account.
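As a rough sketch of what such an unrolled architecture can look like, the toy PyTorch module below alternates a data-consistency gradient step with a small weight-shared convolutional regularizer. All layer sizes, the identity forward operator and the names are illustrative assumptions; the paper's network (recurrent convolutions, joint multi-repetition reconstruction, permutation equivariance) is richer than this.

```python
# Generic sketch of an unrolled proximal-splitting reconstruction network:
# each unrolled iteration does a gradient step on 0.5*||A x - y||^2 followed by
# a learned residual regularization block shared across iterations.
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    def __init__(self, n_iters=6, channels=32):
        super().__init__()
        self.n_iters = n_iters
        self.step = nn.Parameter(torch.full((n_iters,), 0.5))   # learned step sizes
        # One shared (weight-tied, i.e. "recurrent") convolutional regularizer.
        self.reg = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, x, y, forward_op, adjoint_op):
        # x, y: complex images/data stored as 2-channel real tensors (B, 2, H, W).
        for t in range(self.n_iters):
            # Data consistency: gradient step on the data-fidelity term.
            residual = adjoint_op(forward_op(x) - y)
            x = x - self.step[t] * residual
            # Regularization: learned residual update by the shared conv block.
            x = x + self.reg(x)
        return x

# Toy usage with an identity "forward operator", just to show the call pattern.
if __name__ == "__main__":
    net = UnrolledRecon()
    x0 = torch.zeros(1, 2, 64, 64)
    y = torch.randn(1, 2, 64, 64)
    out = net(x0, y, forward_op=lambda z: z, adjoint_op=lambda z: z)
    print(out.shape)
```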