**Objective** Abdominal ultrasonography (AUS) is used to screen for abdominal diseases owing to its low cost, safety, and accessibility. However, the detection rate of pancreatic disease using AUS is unsatisfactory. We evaluated the visualization area of the pancreas and the efficacy of manipulation techniques for AUS with fusion imaging.

**Methods** Magnetic resonance imaging (MRI) volume data were obtained from 20 healthy volunteers in the supine and right lateral positions. The MRI volume data were transferred to an ultrasound machine equipped with fusion imaging software. We evaluated the visualization area of the pancreas before and after postural changes using AUS with fusion imaging and assessed the liquid-filled stomach method, using 500 mL of de-aerated water, in 10 randomly selected volunteers.

**Patients** This study included 20 healthy volunteers (19 men and 1 woman) with a mean age of 33.0 years (range, 21-37.5).

**Results** Fusion imaging revealed that AUS visualized 55% of the entire pancreas; this significantly improved to 75% with a postural change and to 90% with the liquid-filled stomach method (p=0.043). Gastrointestinal gas was the main obstacle to visualization of the pancreas.

**Conclusion** Fusion imaging objectively demonstrated that manipulation techniques can improve pancreatic visualization.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11557206 | PMC |
| http://dx.doi.org/10.2169/internalmedicine.2822-23 | DOI Listing |
Sensors (Basel)
January 2025
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult.
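One common mitigation for the small object sizes in high-resolution UAV imagery (a generic technique, not necessarily the one used in this paper) is to split each frame into overlapping tiles before running the detector, so small objects occupy a larger fraction of each input. A minimal sketch of the tiling arithmetic, with tile size and overlap as assumed parameters:

```python
def tile_coords(width, height, tile=640, overlap=128):
    """Return (x0, y0, x1, y1) windows covering a width x height image
    with overlapping tiles, so small objects appear at a larger
    relative scale in each detector input."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    # Make sure the right and bottom edges are always covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    boxes = []
    for y in ys:
        for x in xs:
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes
```

Detections from each tile are then mapped back to full-image coordinates and de-duplicated (e.g. by non-maximum suppression) across tile overlaps.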
Sensors (Basel)
January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
Sensors (Basel)
January 2025
Centre of Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.
To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed. It combines deflectometry and bright-light-based illumination for image acquisition, deep learning models that classify surfaces as non-defective (OK) or defective (NOK) by fusing dual-modal information at the decision level, and an online network for information dispatching and visualization. Three decision-making algorithms were tested for implementation: a new model built and trained from scratch, and transfer learning of pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes widened the range of defect types the system could identify, while performing multi-modal fusion at the decision level kept its computational complexity low.
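The abstract does not give the fusion rule, but decision-level fusion of two per-modality classifiers can be sketched minimally as follows, assuming (hypothetically) that each model emits a defect probability and that a surface is flagged NOK if either modality detects a defect:

```python
def fuse_decisions(p_deflectometry: float, p_bright_light: float,
                   threshold: float = 0.5) -> str:
    """Decision-level fusion: each illumination modality votes
    independently; the surface is flagged NOK if either model's
    defect probability crosses the threshold."""
    nok_deflect = p_deflectometry >= threshold
    nok_bright = p_bright_light >= threshold
    return "NOK" if (nok_deflect or nok_bright) else "OK"
```

Fusing at the decision level (rather than at the feature level) keeps each modality's model small and independent, which is consistent with the low computational complexity the authors report.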
Sensors (Basel)
January 2025
School of Electronic and Communication Engineering, Sun Yat-sen University, Shenzhen 518000, China.
Exploring the relationships between plant phenotypes and genetic information requires advanced phenotypic analysis techniques for precise characterization. However, the diversity and variability of plant morphology challenge existing methods, which often fail to generalize across species and require extensive annotated data, especially for 3D datasets. This paper proposes a zero-shot 3D leaf instance segmentation method using RGB sensors.
Sensors (Basel)
January 2025
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China.
To address the lack of feature matching caused by occlusion, and the limitation of fixed model parameters, in cross-domain person re-identification, a method based on multi-branch pose-guided occlusion generation is proposed. This method improves the accuracy of person matching and enables identity matching even when pedestrian features are misaligned. First, a novel pose-guided occlusion generation module is designed to enhance the model's ability to extract discriminative features from non-occluded areas.
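The snippet does not specify how the occlusion generation module works; a minimal sketch of the general idea, under the assumption that pose keypoints guide where synthetic occlusions are placed during training (function name and patch scheme are hypothetical, not the paper's):

```python
def occlude_near_keypoint(image, keypoint, size=1, fill=0):
    """Erase a square patch centred on a pose keypoint (x, y),
    simulating the occlusions a re-ID model must become robust to.
    `image` is a list of rows (H x W); modified in place and returned."""
    h, w = len(image), len(image[0])
    x, y = keypoint
    for r in range(max(0, y - size), min(h, y + size + 1)):
        for c in range(max(0, x - size), min(w, x + size + 1)):
            image[r][c] = fill
    return image
```

Training on such pose-guided occluded samples pushes the network to rely on features from the remaining, non-occluded body regions.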