AI Article Synopsis

  • Abdominal ultrasonography (AUS) is a common, safe, and affordable method for screening abdominal diseases, but its detection rate for pancreatic disease is unsatisfactory.
  • This study used fusion imaging with MRI volume data from 20 healthy volunteers to test how postural changes and a liquid-filled stomach affect visualization of the pancreas during AUS.
  • Results showed that pancreatic visualization improved from 55% to 75% with a postural change and to 90% with the liquid-filled stomach method, confirming that gastrointestinal gas is the main barrier to clear imaging of the pancreas.

Article Abstract

Objective: Abdominal ultrasonography (AUS) is used to screen for abdominal diseases owing to its low cost, safety, and accessibility. However, the detection rate of pancreatic disease using AUS is unsatisfactory. We evaluated the visualization area of the pancreas and the efficacy of manipulation techniques for AUS with fusion imaging.

Methods: Magnetic resonance imaging (MRI) volume data were obtained from 20 healthy volunteers in the supine and right lateral positions. The MRI volume data were transferred to an ultrasound machine equipped with fusion imaging software. We evaluated the visualization area of the pancreas before and after postural changes using AUS with fusion imaging, and assessed the liquid-filled stomach method, using 500 mL of de-aerated water, in 10 randomly selected volunteers.

Patients: This study included 20 healthy volunteers (19 men and 1 woman) with a mean age of 33.0 (range 21-37.5) years.

Results: Fusion imaging showed that the visualization area of the entire pancreas with AUS was 55%, which improved significantly to 75% with a postural change and to 90% with the liquid-filled stomach method (p=0.043). Gastrointestinal gas was the main obstacle to visualization of the pancreas.

Conclusion: Fusion imaging objectively demonstrated that manipulation techniques can improve pancreatic visualization.
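
The abstract reports paired per-volunteer improvements in visualization (55% to 75% to 90%, p=0.043) but does not name the statistical test used. The following is a minimal sketch, assuming a paired nonparametric comparison (Wilcoxon signed-rank test) of per-volunteer visualization fractions; the test choice and all numbers are illustrative assumptions, not study data.

```python
# Hedged sketch: the article does not state which test produced p = 0.043.
# This assumes a paired Wilcoxon signed-rank test on per-volunteer pancreas
# visualization fractions. All values below are hypothetical placeholders.
from scipy.stats import wilcoxon

# Hypothetical visualization fractions for 10 volunteers (not real data).
baseline_supine = [0.50, 0.60, 0.55, 0.45, 0.65, 0.50, 0.58, 0.52, 0.60, 0.55]
liquid_filled   = [0.85, 0.92, 0.88, 0.80, 0.95, 0.87, 0.90, 0.86, 0.93, 0.89]

# Paired test: does the liquid-filled stomach method change visualization?
stat, p_value = wilcoxon(baseline_supine, liquid_filled)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.3f}")
```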


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11557206
DOI: http://dx.doi.org/10.2169/internalmedicine.2822-23

Publication Analysis

Top Keywords

fusion imaging (24)
manipulation techniques (12)
visualization area (12)
imaging objectively (8)
evaluated visualization (8)
area pancreas (8)
aus fusion (8)
mri volume (8)
volume data (8)
healthy volunteers (8)

Similar Publications

A Feature-Enhanced Small Object Detection Algorithm Based on Attention Mechanism.

Sensors (Basel)

January 2025

School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.

With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult.


Cross-Modal Collaboration and Robust Feature Classifier for Open-Vocabulary 3D Object Detection.

Sensors (Basel)

January 2025

The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.

Multi-sensor fusion, such as combining LiDAR and camera data for 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.


Improving Industrial Quality Control: A Transfer Learning Approach to Surface Defect Detection.

Sensors (Basel)

January 2025

Centre of Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.

To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed. It combines deflectometry and bright-light illumination for image acquisition; deep learning models that classify surfaces as non-defective (OK) or defective (NOK) and fuse the dual-modal information at the decision level; and an online network for information dispatching and visualization. Three decision-making algorithms were tested: a new model built and trained from scratch, and transfer learning of pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes widened the range of defects the system could identify while keeping computational complexity low, since multi-modal fusion is performed at the decision level. A hedged sketch of this kind of pipeline follows below.
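
The paragraph above describes transfer learning of pre-trained networks and decision-level fusion of two illumination modes, but not the exact implementation. The following is a minimal sketch, not the authors' code: a pretrained ResNet-50 adapted to binary OK/NOK classification, with one branch per illumination mode and the class probabilities averaged as a simple decision-level fusion. The model names, fusion rule, and input tensors are illustrative assumptions.

```python
# Hedged sketch: transfer learning of a pretrained ResNet-50 for OK/NOK
# surface classification, with decision-level fusion of two illumination
# modes by averaging per-class probabilities. Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

def make_classifier(num_classes: int = 2) -> nn.Module:
    # Start from ImageNet weights and replace the final layer (transfer learning).
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                               # freeze the backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # trainable head
    return model

deflectometry_net = make_classifier().eval()
bright_light_net = make_classifier().eval()

@torch.no_grad()
def fused_prediction(img_deflect: torch.Tensor, img_bright: torch.Tensor) -> int:
    # Decision-level fusion: average the class probabilities of both branches.
    p1 = torch.softmax(deflectometry_net(img_deflect), dim=1)
    p2 = torch.softmax(bright_light_net(img_bright), dim=1)
    return int(((p1 + p2) / 2).argmax(dim=1).item())          # 0 = OK, 1 = NOK

# Example with random stand-in images (batch of 1, 3 x 224 x 224).
print(fused_prediction(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)))
```

Fusing at the decision level, as in this sketch, keeps each branch's network unchanged and only combines their outputs, which is the lower-complexity option the paragraph alludes to.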


Segment Any Leaf 3D: A Zero-Shot 3D Leaf Instance Segmentation Method Based on Multi-View Images.

Sensors (Basel)

January 2025

School of Electronic and Communication Engineering, Sun Yat-sen University, Shenzhen 518000, China.

Exploring the relationships between plant phenotypes and genetic information requires advanced phenotypic analysis techniques for precise characterization. However, the diversity and variability of plant morphology challenge existing methods, which often fail to generalize across species and require extensive annotated data, especially for 3D datasets. This paper proposes a zero-shot 3D leaf instance segmentation method using RGB sensors.


To address the lack of feature matching caused by occlusion and by fixed model parameters in cross-domain person re-identification, a method based on multi-branch pose-guided occlusion generation is proposed. This method can effectively improve the accuracy of person matching and enable identity matching even when pedestrian features are misaligned. First, a novel pose-guided occlusion generation module is designed to enhance the model's ability to extract discriminative features from non-occluded areas.

