Purpose: Interventional Cone-Beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics that are based on first-order image properties and lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric that quantifies both the amount of motion degradation and the realism of the structural anatomical content in the image.
Methods: The proposed metric was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric, visual information fidelity (VIF). The deep CNN acted on motion-corrupted images, providing an estimate of the spatial VIF map that would be obtained against a motion-free reference, capturing both motion distortion and anatomical plausibility. The deep CNN featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest. A second contextual, low-resolution branch provided features associated with anatomical context, for disentanglement of motion effects from anatomical appearance. The deep CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol at 120 kV and 9.90 mGy. The performance of the metric was evaluated via correlation with the ground truth VIF and with the underlying deformable motion field in simulated data, with motion amplitudes ranging from 5 to 20 mm and frequencies from 2.4 to 4 cycles/scan. Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90-120 kV) and dose (1.19-39.59 mGy). Further validation was obtained in experimental studies with a deformable phantom. Final validation was obtained by integrating the metric into an autofocus compensation framework applied to motion compensation on experimental datasets, evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity.
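The training scheme above pairs motion-corrupted volumes with voxel-wise similarity targets computed against a motion-free reference. VIF itself is computed in a wavelet/Gaussian-scale-mixture domain; as a minimal sketch of how such paired voxel-wise targets can be generated, the snippet below uses a simpler windowed local-correlation map as a stand-in for the VIF map (the function name and window size are illustrative assumptions, not from the paper):

```python
import numpy as np

def local_similarity_map(reference, corrupted, window=5, eps=1e-8):
    """Voxel-wise local-correlation map between a motion-free reference
    and a motion-corrupted volume; a simplified stand-in for the
    wavelet-based VIF map used as the training target."""
    def box(x):
        # Separable uniform (box) filter applied along every axis.
        k = np.ones(window) / window
        for axis in range(x.ndim):
            x = np.apply_along_axis(np.convolve, axis, x, k, mode="same")
        return x

    mu_r, mu_c = box(reference), box(corrupted)
    var_r = box(reference**2) - mu_r**2
    var_c = box(corrupted**2) - mu_c**2
    cov = box(reference * corrupted) - mu_r * mu_c
    # Local correlation in [-1, 1]; near 1 where structure is preserved.
    return cov / np.sqrt(np.clip(var_r, 0, None) * np.clip(var_c, 0, None) + eps)

rng = np.random.default_rng(0)
ref = rng.normal(size=(16, 16, 16))
identical = local_similarity_map(ref, ref)
degraded = local_similarity_map(ref, ref + rng.normal(scale=2.0, size=ref.shape))
# The map score drops as the second volume is degraded.
```

Averaging such a map over the volume gives a scalar quality score, analogous to the voxel-wise averaging of the local metric reported in the results.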
Results: The magnitude and spatial map of the metric showed consistently high correlation with the ground truth in both simulated and real data, yielding average normalized cross-correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, the metric achieved good correlation with the underlying motion field, with an average NCC of 0.90. In experimental phantom studies, the metric properly reflected changes in motion amplitude and frequency: voxel-wise averaging of the local metric across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using the metric resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft-tissue and high-contrast structures, with reductions in edge-spread-function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in a 9.64% increase in vessel sharpness.
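The NCC values reported above follow the standard zero-mean normalized cross-correlation; a minimal sketch (the paper's exact evaluation protocol, e.g. any masking or regional averaging, may differ):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two arrays,
    bounded in [-1, 1] and invariant to affine intensity changes."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.array([0.0, 1.0, 2.0, 3.0])
assert abs(ncc(x, 2 * x + 5) - 1.0) < 1e-12  # invariant to gain and offset
assert abs(ncc(x, -x) + 1.0) < 1e-12         # perfect anticorrelation gives -1
```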
Conclusion: The proposed metric, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate of structural similarity to quantify motion-induced degradation of image quality and the anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric offers a powerful alternative to conventional motion estimation metrics and a step forward for the application of deep autofocus motion compensation to guidance in clinical interventional procedures.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11155121
DOI: http://dx.doi.org/10.1002/mp.17125
ACS Appl Mater Interfaces
January 2025
College of Electronics and Information, Qingdao University, Qingdao 266071, China.
3D multifunctional wearable piezoresistive sensors have attracted extensive attention in fields such as motion detection, human-computer interaction, and electronic skin. However, current research mainly focuses on improving the foundational performance of piezoresistive sensors, while many advanced demands are often ignored. Herein, a 3D piezoresistive sensor based on rGO@C-ZIF-67@PU is fabricated via high-temperature carbonization and a solvothermal reduction method.
Brain Spine
December 2024
Laboratory of Biomechanics and Medical Imaging, Faculty of Medicine, Saint Joseph University of Beirut, Beirut, Lebanon.
Background: Adults with spinal deformity (ASD) are known to have spinal malalignment, which can impact their quality of life and their autonomy in daily life activities. Among these tasks, ascending and descending stairs is a common activity of daily life that might be affected.
Research Question: What are the main kinematic alterations in ASD during stair ascent and descent?
Methods: 112 primary ASD patients and 34 controls completed HRQoL questionnaires and underwent biplanar X-rays, from which spino-pelvic radiographic parameters were calculated.
Med Image Anal
December 2024
Faculty of Biomedical Engineering, Technion, Haifa, Israel. Electronic address:
Quantitative analysis of pseudo-diffusion in diffusion-weighted magnetic resonance imaging (DWI) data shows potential for assessing fetal lung maturation and generating valuable imaging biomarkers. Yet, the clinical utility of DWI data is hindered by unavoidable fetal motion during acquisition. We present IVIM-morph, a self-supervised deep neural network model for motion-corrected quantitative analysis of DWI data using the Intra-voxel Incoherent Motion (IVIM) model.
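The IVIM model referenced above describes DWI signal decay as a bi-exponential mixture of a fast pseudo-diffusion (perfusion) component and slower tissue diffusion. A sketch of the forward model follows; the parameter values are illustrative, not taken from the article:

```python
import numpy as np

def ivim_signal(b, s0, f, d_star, d):
    """Intra-voxel Incoherent Motion (IVIM) bi-exponential signal model:
    S(b) = S0 * (f * exp(-b * D*) + (1 - f) * exp(-b * D)),
    where f is the perfusion fraction and D* >> D."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

# Illustrative acquisition: b-values in s/mm^2, typical-order parameters.
b_values = np.array([0.0, 50.0, 200.0, 800.0])
signal = ivim_signal(b_values, s0=1.0, f=0.1, d_star=0.05, d=0.001)
assert abs(signal[0] - 1.0) < 1e-12   # S(0) = S0
assert np.all(np.diff(signal) < 0)    # signal decays monotonically with b
```

Fitting f, D*, and D per voxel from such decays is what fetal motion corrupts, motivating the joint registration-and-fit approach of IVIM-morph.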
Med Phys
January 2025
National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
Background: Respiratory motion during radiotherapy (RT) may reduce the therapeutic effect and increase the dose received by organs at risk. This can be addressed by real-time tracking, where respiratory motion prediction is currently required to compensate for system latency in RT systems. Notably, deep learning has been considered for predicting future images in image-guided adaptive RT systems.
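As context for the latency problem described above: the simplest baseline for respiratory-motion prediction is linear extrapolation of the breathing trace over the system latency, which deep predictors aim to outperform. A minimal sketch, with illustrative (assumed) sampling rate and latency:

```python
import numpy as np

def linear_extrapolate(trace, dt, latency):
    """Predict the trace value `latency` seconds ahead using the
    finite-difference velocity of the last two samples. A common
    baseline for respiratory-motion prediction."""
    velocity = (trace[-1] - trace[-2]) / dt
    return trace[-1] + velocity * latency

dt, latency = 0.1, 0.3                 # 10 Hz sampling, 300 ms system latency
t = np.arange(0, 2, dt)
trace = np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths/min surrogate signal
pred = linear_extrapolate(trace, dt, latency)
truth = np.sin(2 * np.pi * 0.25 * (t[-1] + latency))
# On a smooth trace the linear baseline stays close to the true value;
# irregular breathing is where learned predictors earn their keep.
```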
Radiographics
January 2025
From the Department of Radiology, Cardiovascular Imaging, Mayo Clinic, 200 1st St SW, Rochester, MN 55905 (P.S.R., P.A.A.); Department of Radiology, Division of Cardiothoracic Imaging, Jefferson University Hospitals, Philadelphia, Pa (B.S.); Department of Radiology, Baylor Health System, Dallas, Tex (P.R.); Department of Diagnostic Radiology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR (M.Y.N.); and Department of Diagnostic Radiology, Cleveland Clinic, Cleveland, Ohio (M.A.B.).
Cardiac MRI (CMR) is an important imaging modality in the evaluation of cardiovascular diseases. CMR image acquisition is technically challenging and, in some circumstances, is associated with artifacts, both general and sequence specific. Recognizing imaging artifacts, understanding their causes, and applying effective approaches for artifact mitigation are critical for successful CMR.