Objectives: To assess the impact of fusion imaging guidance on fluoroscopy duration and volume of contrast agent used for pulmonary artery embolization.

Methods: Thirty-four consecutive patients who underwent pulmonary artery embolization for pulmonary arteriovenous malformation (n = 28) or hemoptysis (n = 6) were retrospectively included. In the experimental group (n = 15), patients were treated using fusion imaging with 2D/3D registration; in the control group (n = 19), no fusion imaging was used. Fluoroscopy duration and the volume of contrast agent used were measured, and intergroup comparisons were performed.

Results: The average volume of contrast agent used for embolization was significantly lower in the fusion group (118.3 ml) than in the control group (285.3 ml) (p < 0.002). The mean fluoroscopy duration did not differ significantly between the groups (19.5 min in the fusion group vs 31.4 min in the control group; p = 0.10). No significant difference was observed in average X-ray exposure (air kerma) (p = 0.68 in the univariate analysis). The technical success rate was 100% in both groups.

Conclusion: Fusion imaging significantly reduces the volume of contrast medium needed to perform pulmonary artery embolization, whereas fluoroscopy duration and X-ray exposure did not differ significantly between groups.

Advances In Knowledge: CTA-based fusion imaging using 2D/3D registration is a valuable tool for performing pulmonary artery embolization, helpful for planning and guiding catheterization. Compared with conventional imaging guidance, fusion imaging reduces the volume of contrast agent used.
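
The abstract does not state which statistical test was used for these comparisons. As a minimal sketch, assuming a two-sided Mann-Whitney U test (a common choice for small samples of this kind) and hypothetical per-patient contrast volumes, the contrast-volume comparison could be reproduced as follows:

```python
# Minimal sketch of the intergroup comparison reported above.
# The test choice (two-sided Mann-Whitney U) and the per-patient
# volumes are assumptions: the abstract reports only group means
# and p-values. The hypothetical values below are chosen so the
# group means match the reported 118.3 ml and 285.3 ml.
from scipy import stats

fusion_ml = [95, 110, 130, 88, 120, 105, 140, 125,
             98, 115, 135, 122, 108, 142, 141]              # n = 15
control_ml = [250, 310, 275, 290, 320, 260, 305, 280, 295, 270,
              330, 255, 300, 285, 315, 265, 240, 298, 278]  # n = 19

u_stat, p_value = stats.mannwhitneyu(fusion_ml, control_ml,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.5f}")
```

Because the hypothetical groups do not overlap at all, this sketch yields a p-value far smaller than the reported p < 0.002; the real per-patient data would be needed to reproduce the actual statistic.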

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10607406
DOI: http://dx.doi.org/10.1259/bjr.20220815

Publication Analysis

Top Keywords

fusion imaging (28); pulmonary artery (20); fluoroscopy duration (20); artery embolization (16); contrast agent (16); volume contrast (12); control group (12); fusion (9); imaging guidance (8); fusion group (8)

Similar Publications

A Feature-Enhanced Small Object Detection Algorithm Based on Attention Mechanism.

Sensors (Basel)

January 2025

School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.

With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult.

Cross-Modal Collaboration and Robust Feature Classifier for Open-Vocabulary 3D Object Detection.

Sensors (Basel)

January 2025

The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.

Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.

Improving Industrial Quality Control: A Transfer Learning Approach to Surface Defect Detection.

Sensors (Basel)

January 2025

Centre of Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.

To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed. It combines deflectometry and bright-light illumination for image acquisition, deep learning models that classify surfaces as non-defective (OK) or defective (NOK) by fusing dual-modal information at the decision level, and an online network for information dispatching and visualization. Three decision-making algorithms were tested for implementation: a new model built and trained from scratch, and transfer learning from pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes widened the range of defect types the system could identify, while multi-modal fusion at the decision level kept its computational complexity low.
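
As a rough sketch of what decision-level fusion of two such classifiers can look like (the function names, weights, and averaging rule here are illustrative assumptions, not the paper's published implementation):

```python
# Illustrative sketch of decision-level fusion for a two-class (OK/NOK)
# surface-inspection task. The models, weights, and fusion rule are
# assumptions; the abstract does not specify the implementation.
import numpy as np

def fuse_decisions(p_deflectometry: np.ndarray,
                   p_bright_light: np.ndarray,
                   w: float = 0.5) -> np.ndarray:
    """Weighted average of per-modality class probabilities, shape (n, 2)."""
    return w * p_deflectometry + (1.0 - w) * p_bright_light

# Hypothetical softmax outputs for three parts: columns are [P(OK), P(NOK)].
p_defl = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4]])
p_light = np.array([[0.8, 0.2], [0.4, 0.6], [0.2, 0.8]])

fused = fuse_decisions(p_defl, p_light)
labels = np.where(fused[:, 1] > 0.5, "NOK", "OK")
print(labels)  # ['OK' 'NOK' 'NOK']
```

Fusing at the decision level (rather than at the feature level) keeps each modality's classifier independent, which is what allows the two illumination modes to be trained and maintained separately.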

Segment Any Leaf 3D: A Zero-Shot 3D Leaf Instance Segmentation Method Based on Multi-View Images.

Sensors (Basel)

January 2025

School of Electronic and Communication Engineering, Sun Yat-sen University, Shenzhen 518000, China.

Exploring the relationships between plant phenotypes and genetic information requires advanced phenotypic analysis techniques for precise characterization. However, the diversity and variability of plant morphology challenge existing methods, which often fail to generalize across species and require extensive annotated data, especially for 3D datasets. This paper proposes a zero-shot 3D leaf instance segmentation method using RGB sensors.

To address the lack of feature matching caused by occlusion and by fixed model parameters in cross-domain person re-identification, a method based on multi-branch pose-guided occlusion generation is proposed. This method can effectively improve the accuracy of person matching and enable identity matching even when pedestrian features are misaligned. First, a novel pose-guided occlusion generation module is designed to enhance the model's ability to extract discriminative features from non-occluded areas.