Background: Vascular diseases are often treated minimally invasively. The interventional material (stents, guidewires, etc.) used during such percutaneous interventions is visualized by some form of image guidance. Today, this image guidance is usually provided by 2D X-ray fluoroscopy, that is, a live 2D image. 3D X-ray fluoroscopy, that is, a live 3D image, could accelerate existing interventions and enable new ones. However, existing algorithms for the 3D reconstruction of interventional material either require too many X-ray projections, and therefore too much dose, or are only capable of reconstructing single curvilinear structures.
Purpose: Using only two new X-ray projections per 3D reconstruction, we aim to reconstruct more complex arrangements of interventional material than was previously possible.
Methods: This is achieved by improving a previously presented deep learning-based reconstruction pipeline, which assumes that the X-ray images are acquired by a continuously rotating biplane system, in two ways: (a) separation of the reconstruction of different object types, (b) motion compensation using spatial transformer networks.
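The abstract does not describe the motion-compensation module in detail. As a rough, hypothetical illustration of the spatial transformer idea referenced in (b), the sketch below shows a minimal PyTorch spatial transformer that regresses an affine warp from an input image and resamples it; the class name, layer sizes, and the restriction to a 2D affine warp are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTransformer2D(nn.Module):
    """Minimal 2D spatial transformer (illustrative sketch): a small
    localization network predicts 6 affine parameters, which are used
    to resample the input via a sampling grid."""

    def __init__(self, in_channels=1):
        super().__init__()
        self.localization = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(8),        # fixed 8x8 spatial output
            nn.Flatten(),
            nn.Linear(8 * 8 * 8, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 6),               # 2x3 affine matrix
        )
        # Initialize the regressor to the identity transform so that
        # training starts from an unwarped image.
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float32))

    def forward(self, x):
        theta = self.localization(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


# Usage: warp a batch of single-channel frames (placeholder data).
stn = SpatialTransformer2D(in_channels=1)
frames = torch.randn(4, 1, 128, 128)
aligned = stn(frames)   # same shape, resampled by the predicted warp
```

Because the warp is produced by differentiable grid sampling, such a module can be trained end-to-end inside a larger reconstruction pipeline; how the actual pipeline couples this to respiratory motion is not specified in the abstract.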
Results: Our pipeline achieves submillimeter accuracy on measured data of a stent and two guidewires inside an anthropomorphic phantom with respiratory motion. In an ablation study, we find that the aforementioned algorithmic changes improve our two figures of merit by 75 % (1.76 mm → 0.44 mm) and 59 % (1.15 mm → 0.47 mm), respectively. A comparison of our measured dose area product (DAP) rate to DAP rates of 2D fluoroscopy indicates a roughly similar dose burden.
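For clarity, the quoted relative improvements follow directly from the reported before/after values as $(\text{before}-\text{after})/\text{before}$:

$$\frac{1.76\,\text{mm} - 0.44\,\text{mm}}{1.76\,\text{mm}} \approx 0.75, \qquad \frac{1.15\,\text{mm} - 0.47\,\text{mm}}{1.15\,\text{mm}} \approx 0.59.$$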
Conclusions: This dose efficiency combined with the ability to reconstruct complex arrangements of interventional material makes the presented algorithm a promising candidate to enable 3D fluoroscopy.
DOI: http://dx.doi.org/10.1002/mp.16612