Dose prediction is a crucial step in automated radiotherapy planning for liver cancer. Several deep learning-based approaches to dose prediction have been proposed to improve the efficiency of plan design and the quality of radiotherapy plans. However, these approaches usually take CT images together with the contours of organs at risk (OARs) and the planning target volume (PTV) as a single multi-channel input, which makes it difficult to extract sufficient feature information from each input and results in unsatisfactory dose distributions. In this paper, we propose a novel dose prediction network for liver cancer based on hierarchical feature fusion and interactive attention. A feature extraction module is first constructed to extract multi-scale features from the different inputs, and a hierarchical feature fusion module is then built to fuse these multi-scale features hierarchically. A decoder based on an attention mechanism is designed to gradually reconstruct the fused features into a dose distribution. Additionally, we design an autoencoder network to generate a perceptual loss during the training stage, which is used to improve the accuracy of dose prediction. The proposed method is evaluated on a private clinical dataset and achieves a homogeneity index (HI) of 0.31 and a conformity index (CI) of 0.87. These results are better than those of several existing methods, indicating that the dose distributions generated by the proposed method are close to the clinically approved ones. The code is available at https://github.com/hired-ld/FA-Net.
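Based only on the abstract's description, the sketch below illustrates one plausible way to wire such a network in PyTorch: separate encoder branches for the CT image, the OAR contours, and the PTV contour; scale-wise (hierarchical) fusion of their multi-scale features; and an attention-gated decoder that reconstructs the dose map. All module names, channel widths, and the specific fusion and attention operations are assumptions for illustration, not the authors' FA-Net implementation (see the linked repository for that); the autoencoder-based perceptual loss is omitted for brevity.

```python
# Hypothetical sketch of a multi-branch dose prediction network with
# hierarchical feature fusion and an attention-gated decoder.
# Channel widths, fusion, and attention details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    """One encoder branch per input (CT / OAR contours / PTV contour)."""
    def __init__(self, in_ch, widths=(32, 64, 128, 256)):
        super().__init__()
        chans = [in_ch] + list(widths)
        self.stages = nn.ModuleList(conv_block(chans[i], chans[i + 1]) for i in range(len(widths)))

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)          # keep the feature map at this scale
            x = F.max_pool2d(x, 2)   # downsample for the next scale
        return feats                 # multi-scale features, coarsest last


class AttentionGate(nn.Module):
    """Simple additive attention gate applied to skip features in the decoder."""
    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch, 1)
        self.key = nn.Conv2d(ch, ch, 1)
        self.score = nn.Conv2d(ch, 1, 1)

    def forward(self, skip, up):
        attn = torch.sigmoid(self.score(F.relu(self.query(skip) + self.key(up))))
        return skip * attn


class DosePredictor(nn.Module):
    def __init__(self, widths=(32, 64, 128, 256)):
        super().__init__()
        self.enc_ct = Encoder(1, widths)
        self.enc_oar = Encoder(1, widths)   # OAR masks could also be multi-channel
        self.enc_ptv = Encoder(1, widths)
        # Hierarchical fusion: merge the three branches scale by scale.
        self.fuse = nn.ModuleList(nn.Conv2d(3 * w, w, 1) for w in widths)
        # Attention-gated decoder that upsamples back to the dose map.
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(widths[i], widths[i - 1], 2, stride=2)
            for i in range(len(widths) - 1, 0, -1)
        )
        self.gates = nn.ModuleList(AttentionGate(w) for w in reversed(widths[:-1]))
        self.dec = nn.ModuleList(conv_block(2 * w, w) for w in reversed(widths[:-1]))
        self.head = nn.Conv2d(widths[0], 1, 1)

    def forward(self, ct, oar, ptv):
        # Fuse the three branches at every scale.
        feats = [
            self.fuse[i](torch.cat([f_ct, f_oar, f_ptv], dim=1))
            for i, (f_ct, f_oar, f_ptv) in enumerate(
                zip(self.enc_ct(ct), self.enc_oar(oar), self.enc_ptv(ptv)))
        ]
        x = feats[-1]
        for up, gate, dec, skip in zip(self.up, self.gates, self.dec, reversed(feats[:-1])):
            x = up(x)
            x = dec(torch.cat([gate(skip, x), x], dim=1))
        return self.head(x)  # predicted dose distribution


# Example forward pass with hypothetical 256x256 single-channel inputs.
model = DosePredictor()
dose = model(torch.randn(1, 1, 256, 256),
             torch.randn(1, 1, 256, 256),
             torch.randn(1, 1, 256, 256))
print(dose.shape)  # torch.Size([1, 1, 256, 256])
```

In this sketch, each input keeps its own encoder so that branch-specific features are not diluted by a shared multi-channel input, which is the motivation stated in the abstract; the 1x1 fusion convolutions and additive attention gate are placeholders for the paper's hierarchical fusion and interactive attention modules.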
DOI: http://dx.doi.org/10.1016/j.artmed.2024.102961