Purpose: To investigate the feasibility and value of three-dimensional ultrasound/contrast-enhanced ultrasound (3D US-CEUS) fusion imaging for the immediate evaluation of technical success and the guidance of supplementary ablation during thermal ablation of liver cancer.

Materials And Methods: Patients diagnosed with liver cancer who were scheduled to undergo thermal ablation, either radiofrequency ablation (RFA) or microwave ablation (MWA), were enrolled. 3D US-CEUS fusion imaging was used to immediately assess technical success and to guide supplementary ablation during the procedure. Contrast-enhanced computed tomography/magnetic resonance imaging (CECT/CEMRI) was performed one month after ablation to assess technique effectiveness. The registration success rate, duration of 3D US-CEUS fusion imaging, technique effectiveness rate, and major complications were recorded.

Results: In total, 76 patients with 95 tumours who underwent RFA or MWA and were assessed with 3D US-CEUS fusion imaging were enrolled. The registration success rate of 3D US-CEUS fusion imaging was 93.7% (89/95), and the mean duration was 4.0 ± 1.1 min. Thirty lesions received immediate supplementary ablation during the procedure. The technique effectiveness rate of ablation was 98.8% (81/82). There were no major complications related to ablation.

Conclusions: 3D US-CEUS fusion imaging is a feasible and valuable technique for immediately evaluating technical success and guiding supplementary ablation during thermal ablation of liver cancer.

Source
http://dx.doi.org/10.1080/02656736.2017.1373306

Similar Publications

SEPO-FI: Deep-learning based software to calculate fusion index of muscle cells.

Comput Biol Med

January 2025

School of Computer Science, Chungbuk National University, Cheongju 28644, Republic of Korea.

The fusion index is a critical metric for quantitatively assessing the transformation of in vitro muscle cells into myotubes in the biological and medical fields. Calculating this index manually involves labor-intensive counting of numerous muscle cell nuclei in images and determining whether each nucleus lies inside or outside the myotubes, which leads to significant inter-observer variation. To address these challenges, this study proposes a three-stage process that integrates the strengths of pattern recognition and deep learning to automatically calculate the fusion index.
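
The fusion index itself is simple arithmetic: the percentage of counted nuclei that lie inside myotubes. As a minimal sketch, assuming nucleus centroids and a binary myotube mask have already been produced by an upstream segmentation step (the paper's three-stage pipeline is not reproduced here), the calculation looks like this:

```python
import numpy as np

def fusion_index(nucleus_centroids, myotube_mask):
    """Percentage of nuclei located inside myotubes.

    nucleus_centroids : (N, 2) array of (row, col) nucleus positions
    myotube_mask      : 2-D boolean array, True where a myotube is present
    """
    rows, cols = np.round(nucleus_centroids).astype(int).T
    inside = myotube_mask[rows, cols]          # True where a nucleus sits in a myotube
    return 100.0 * inside.sum() / inside.size  # fusion index in percent

# Toy example: 3 of 4 nuclei fall inside the myotube region -> 75.0
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 20:80] = True
nuclei = np.array([[30, 30], [50, 60], [70, 40], [5, 5]])
print(fusion_index(nuclei, mask))
```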

Objectives: This study aimed to investigate the accuracy of multiparametric magnetic resonance imaging (mpMRI), a genetic urinary test (GUT), and the Prostate Cancer Prevention Trial risk calculator version 2.0 (PCPTRC2) for diagnosing clinically significant prostate cancer (csPCa) in biopsy-naïve patients.

Materials And Methods: In a single-center study conducted between 2021 and 2024, participants underwent prostate mpMRI, GUT, and ultrasound (US)-guided biopsy.

A Feature-Enhanced Small Object Detection Algorithm Based on Attention Mechanism.

Sensors (Basel)

January 2025

School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.

With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult.
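
The abstract does not specify the attention design, so the sketch below is only an illustration: a squeeze-and-excitation-style channel attention block, one common way to enhance feature maps before a small-object detection head. The `ChannelAttention` module, the reduction ratio, and all tensor shapes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative only)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial context
        self.fc = nn.Sequential(                  # excitation: per-channel weights
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))          # reweight feature channels

feat = torch.randn(1, 64, 80, 80)        # hypothetical backbone feature map
print(ChannelAttention(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```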

Cross-Modal Collaboration and Robust Feature Classifier for Open-Vocabulary 3D Object Detection.

Sensors (Basel)

January 2025

The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.

Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
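
As a rough illustration of what "open-vocabulary" means in practice (the paper's actual cross-modal design is not described in the abstract), the sketch below matches detected 3D region features against text embeddings of arbitrary category names by cosine similarity, so adding a new category only requires a new text prompt. All tensors and category names are made up.

```python
import torch
import torch.nn.functional as F

# Hypothetical features of 5 detected 3D boxes and embeddings of 3 free-text
# category names, e.g. ["car", "stroller", "scooter"] (all values made up).
region_feats = torch.randn(5, 512)
text_embeds = torch.randn(3, 512)

# Cosine similarity between every box feature and every category embedding.
sim = F.normalize(region_feats, dim=-1) @ F.normalize(text_embeds, dim=-1).T
print(sim.argmax(dim=-1))  # best-matching category index for each box
```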

Improving Industrial Quality Control: A Transfer Learning Approach to Surface Defect Detection.

Sensors (Basel)

January 2025

Centre of Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.

To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed. It combines deflectometry and bright-light illumination for image acquisition, deep learning models that fuse dual-modal information at the decision level to classify surfaces as non-defective (OK) or defective (NOK), and an online network for information dispatching and visualization. Three decision-making algorithms were tested: a new model built and trained from scratch, and transfer learning of two pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes widened the range of defect types the system could identify while keeping computational complexity low by performing multi-modal fusion at the decision level.
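
As an illustration of decision-level fusion (the abstract does not give the exact fusion rule, so the OR rule and all numbers below are assumptions), each illumination mode gets its own OK/NOK classifier and only their final decisions are combined:

```python
import numpy as np

# Made-up per-part NOK probabilities from two single-modality classifiers:
# one fed deflectometry images, one fed bright-light images.
p_nok_deflectometry = np.array([0.10, 0.85, 0.40])
p_nok_bright_light = np.array([0.05, 0.20, 0.90])

# Assumed fusion rule: flag a part as defective if either modality flags it,
# since the two illumination modes reveal different defect types.
nok = (p_nok_deflectometry > 0.5) | (p_nok_bright_light > 0.5)
print(nok)  # [False  True  True]
```

Fusing at the decision level keeps each single-modality model independent and small, which is consistent with the lower computational complexity the authors report.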
