Purpose: We assess the performance of a recurrent frame generation algorithm for predicting late time frames from early frames in dynamic brain PET imaging.
Methods: Clinical dynamic 18F-DOPA brain PET/CT studies of 46 subjects were retrospectively employed, with ten-fold cross-validation. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25-90 minutes) from the initial 13 frames (0-25 minutes). Quantitative analysis of the predicted dynamic PET frames was performed on the test and validation datasets using established metrics.
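For orientation, the sketch below shows a toy recurrent frame predictor with the same input/output arrangement as the study (13 observed frames in, 13 future frames out). It is a minimal illustration only: the convolutional recurrent cell, hidden size, and image dimensions are assumptions, not the stochastic adversarial video prediction architecture used in the paper.

```python
# Toy recurrent frame predictor: 13 early frames in, 13 late frames out.
# Architecture, hidden size, and image size are illustrative assumptions;
# this is NOT the stochastic adversarial video prediction model itself.
import torch
import torch.nn as nn

class ToyRecurrentPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = hidden
        # Recurrent cell: fold the current frame into a spatial hidden state.
        self.step = nn.Sequential(
            nn.Conv2d(1 + hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Read a predicted frame out of the hidden state.
        self.to_frame = nn.Conv2d(hidden, 1, kernel_size=3, padding=1)

    def forward(self, early, n_future=13):
        # early: (B, T_in, H, W) stack of observed early time frames
        b, t_in, h_px, w_px = early.shape
        h = early.new_zeros(b, self.hidden, h_px, w_px)
        for t in range(t_in):                  # encode the observed frames
            h = self.step(torch.cat([early[:, t:t + 1], h], dim=1))
        preds = []
        for _ in range(n_future):              # roll the state forward
            frame = self.to_frame(h)
            h = self.step(torch.cat([frame, h], dim=1))
            preds.append(frame)
        return torch.cat(preds, dim=1)         # (B, n_future, H, W)

x = torch.randn(2, 13, 128, 128)               # 2 studies, 13 frames each
print(ToyRecurrentPredictor()(x).shape)        # torch.Size([2, 13, 128, 128])
```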
Results: The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution. Bland-Altman plots showed the lowest tracer uptake bias (-0.04) for the putamen region and the smallest variance (95% CI: -0.38, +0.14) for the cerebellum. Region-wise Patlak graphical analysis in the caudate and putamen regions for eight subjects from the test and validation datasets showed that the average bias for the influx rate constant Ki and the distribution volume was 4.3%, 5.1% and 4.4%, 4.2% (P < 0.05), respectively.
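The Patlak comparison rests on the standard graphical model C_T(t)/C_p(t) = Ki * (integral of C_p from 0 to t)/C_p(t) + V, fit linearly once the tracer distribution is near steady state. The sketch below shows a generic region-wise fit; the time grid, input function, and the t* cutoff are synthetic placeholders, not the study's protocol or data.

```python
# Generic Patlak graphical fit: slope = Ki (influx rate constant),
# intercept = V (distribution volume). The time grid, input function,
# and t* cutoff are synthetic placeholders, not the study's data.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak_fit(t, c_tissue, c_plasma, t_star=25.0):
    """Linear fit of C_T/C_p against integral(C_p)/C_p for t >= t_star."""
    x = cumulative_trapezoid(c_plasma, t, initial=0.0) / c_plasma
    y = c_tissue / c_plasma
    late = t >= t_star
    ki, v = np.polyfit(x[late], y[late], deg=1)
    return ki, v

# Synthetic check: a time-activity curve built from the Patlak model
# itself recovers Ki = 0.02 and V = 0.3.
t = np.linspace(1.0, 90.0, 30)                 # minutes
c_p = np.exp(-0.05 * t) + 0.05                 # toy plasma input function
c_t = 0.02 * cumulative_trapezoid(c_p, t, initial=0.0) + 0.3 * c_p
print(patlak_fit(t, c_t, c_p))                 # approx (0.02, 0.3)
```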
Conclusion: We have developed a novel deep learning approach for fast dynamic brain PET imaging capable of generating the last 65 minutes of time frames from the initial 25 minutes of frames, thus enabling a significant reduction in scanning time.
Full-text PDF: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8518550 (PMC)
DOI: http://dx.doi.org/10.1002/mp.15063