While dual-energy computed tomography (DECT) provides energy-specific information in clinical practice, single-energy CT (SECT) remains predominant, limiting the number of patients who can benefit from DECT. This study proposed a novel method to generate synthetic low-energy virtual monochromatic images at 50 keV (sVMIs) from SECT images using a transformer-based deep learning model, SwinUNETR. Data were obtained from 85 patients who underwent head and neck radiotherapy. Of these, the model was built using data from 70 patients for whom only DECT images were available. Data from the remaining 15 patients, for whom both DECT and SECT images were available, were used to generate predictions from the actual SECT images. The SwinUNETR model was used to generate the sVMIs. Image quality was evaluated, and the results were compared with those of a convolutional neural network-based model, Unet. The mean absolute errors from the true VMIs were 36.5 ± 4.9 and 33.0 ± 4.4 Hounsfield units for Unet and SwinUNETR, respectively. SwinUNETR yielded smaller errors in tissue attenuation values than Unet. The contrast changes in SwinUNETR-generated sVMIs from SECT were closer to those of DECT-derived VMIs than were the contrast changes in Unet-generated sVMIs. This study demonstrated the potential of transformer-based models to generate synthetic low-energy VMIs from SECT images, thereby improving the image quality of head and neck cancer imaging. It provides a practical solution for obtaining low-energy VMIs from SECT data, which can benefit the many facilities and patients without access to DECT technology.
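The following is a minimal sketch, not the authors' code, of how a SwinUNETR model might be set up with MONAI for single-channel SECT-to-sVMI regression and how a mean absolute error in Hounsfield units could be computed. The patch size, feature size, and HU normalization range are illustrative assumptions rather than values reported in the study.

```python
# Illustrative sketch only: a SwinUNETR regression model and an HU-scale MAE metric.
# All hyperparameters here are assumptions, not the published configuration.
import torch
from monai.networks.nets import SwinUNETR

# Single-channel CT volume in, single-channel 50 keV sVMI out.
# Note: img_size is required in older MONAI releases and deprecated in newer ones.
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=1,
    feature_size=48,
)

def mae_hu(pred: torch.Tensor, target: torch.Tensor,
           hu_min: float = -1024.0, hu_max: float = 3071.0) -> torch.Tensor:
    """Mean absolute error in Hounsfield units, assuming inputs were
    min-max scaled to [0, 1] over the assumed HU range."""
    return torch.mean(torch.abs(pred - target)) * (hu_max - hu_min)

# Dummy forward pass on a random SECT patch (batch, channel, D, H, W).
sect_patch = torch.rand(1, 1, 96, 96, 96)
with torch.no_grad():
    svmi_patch = model(sect_patch)
print(mae_hu(svmi_patch, torch.rand_like(svmi_patch)))
```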
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11522201
DOI: http://dx.doi.org/10.1007/s10278-024-01111-z