The laparoscopic approach has gained acceptance in hepatopancreaticobiliary surgery, offering several advantages including reduced blood loss, less postoperative pain, and shorter length of stay. However, long operating times can be associated with surgeon and assistant fatigue and image tremor. Robotic camera holders have been designed to overcome these drawbacks but may carry significant costs. The aim of this study was to evaluate the economics of their use compared with standard human assistants in a single-surgeon consecutive series of laparoscopic liver resections performed from January 2014 to May 2015. Only the use of nurse assistants with no advanced training and postgraduate year 2 doctors was cheaper than use of the device. We suggest that a robotic camera holder is cost-beneficial and may have wider service and educational benefits.
DOI: http://dx.doi.org/10.1097/SLE.0000000000000452
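The abstract does not reproduce its cost figures here, but the comparison it describes amounts to amortizing the device's capital and maintenance costs over its expected caseload and weighing that against assistant pay per case. Below is a minimal sketch of that kind of calculation; all prices, rates, and caseloads are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a per-case cost comparison between a robotic camera
# holder and human camera assistants. Every figure below is a hypothetical
# placeholder, not a value reported in the study.

def device_cost_per_case(purchase_price, annual_maintenance, lifespan_years,
                         cases_per_year, consumables_per_case=0.0):
    """Amortize capital and maintenance costs over the expected caseload."""
    total_cases = lifespan_years * cases_per_year
    capital_per_case = purchase_price / total_cases
    maintenance_per_case = annual_maintenance / cases_per_year
    return capital_per_case + maintenance_per_case + consumables_per_case

def assistant_cost_per_case(hourly_rate, mean_operative_hours):
    """Cost of a human camera assistant for one procedure."""
    return hourly_rate * mean_operative_hours

if __name__ == "__main__":
    device = device_cost_per_case(purchase_price=30_000, annual_maintenance=1_500,
                                  lifespan_years=5, cases_per_year=60)
    # Hypothetical hourly rates for different assistant grades
    assistants = {"nurse (no advanced training)": 25, "PGY-2 doctor": 35,
                  "senior trainee": 55, "consultant": 90}
    print(f"Robotic camera holder: {device:.2f} per case")
    for grade, rate in assistants.items():
        print(f"{grade}: {assistant_cost_per_case(rate, 4.0):.2f} per case")
```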
Sensors (Basel)
January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
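The basic geometric step behind LiDAR-camera fusion of the kind this work builds on is projecting LiDAR points into the image plane using the sensors' extrinsic calibration and the camera intrinsics. The sketch below illustrates only that step; the calibration matrices are illustrative placeholders, and none of it reflects the paper's open-vocabulary detection model.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform from LiDAR to camera frame
    K            : (3, 3) camera intrinsic matrix
    Returns pixel coordinates and depths for points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4) homogeneous points
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]            # points in the camera frame
    in_front = pts_cam[:, 2] > 0.1                        # drop points behind the camera
    pts_cam = pts_cam[in_front]
    pix = (K @ pts_cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                        # perspective divide
    return pix, pts_cam[:, 2]

# Illustrative placeholder calibration (not taken from the paper)
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
points = np.random.uniform(-20, 20, size=(1000, 3))
uv, depth = project_lidar_to_image(points, T, K)
```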
Sensors (Basel)
January 2025
Key Laboratory of Modern Agricultural Equipment, Ministry of Agriculture and Rural Affairs, Nanjing Institute of Agricultural Mechanization, Nanjing 210014, China.
To address several challenges associated with the manual harvesting of , including low efficiency, significant damage, and high costs, this study designed a machine vision-based intelligent harvesting device according to its agronomic characteristics and morphological features. The device mainly comprises a frame, a camera, a truss-type robotic arm, a flexible manipulator, and a control system. The FES-YOLOv5s deep learning target detection model was used to accurately identify and locate .
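As a rough illustration of the detection-and-localization step, the sketch below runs a stock YOLOv5s model through torch.hub and returns bounding-box centres above a confidence threshold. The paper's FES-YOLOv5s weights and its pixel-to-arm coordinate mapping are not available here, so both are stand-ins.

```python
import torch

# Load a stock YOLOv5s model as a stand-in for the paper's FES-YOLOv5s variant.
# A custom checkpoint could be loaded with ('ultralytics/yolov5', 'custom', path='weights.pt').
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_targets(image_path, conf_threshold=0.5):
    """Return pixel-space centres of detections above a confidence threshold."""
    results = model(image_path)
    detections = results.xyxy[0]   # (N, 6) tensor: x1, y1, x2, y2, confidence, class
    centres = []
    for x1, y1, x2, y2, conf, cls in detections.tolist():
        if conf >= conf_threshold:
            centres.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0, conf))
    return centres

# Each centre would then be mapped to arm coordinates via the camera calibration
# before commanding the truss-type arm and flexible manipulator.
```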
Sensors (Basel)
January 2025
Centre for Automation and Robotics (CAR UPM-CSIC), Escuela Técnica Superior de Ingeniería y Diseño Industrial (ETSIDI), Universidad Politécnica de Madrid, Ronda de Valencia 3, 28012 Madrid, Spain.
Analysis of human gait is a fundamental area of investigation within biomechanics, clinical research, and numerous other interdisciplinary fields. The progression of visual sensor technology and machine learning algorithms has enabled substantial developments in human gait analysis systems. This paper presents a comprehensive review of the advancements and recent findings in vision-based human gait analysis systems over the past five years, with a special emphasis on the role of vision sensors, machine learning algorithms, and technological innovations.
Sensors (Basel)
January 2025
Department of Mechanical and Intelligent Systems Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan.
Aerial manipulation is becoming increasingly important for practical applications in which unmanned aerial vehicles (UAVs) pick up, transport, and place objects in the global workspace. In this paper, an aerial manipulation system consisting of a UAV, two onboard cameras, and a multi-fingered robotic hand with proximity sensors is developed. To achieve self-contained autonomous navigation to a targeted object, onboard tracking and depth cameras are used to detect the target and to control the UAV to reach it, even in a Global Positioning System-denied environment.
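A common way to realize "detect the target and control the UAV to reach it" is image-based visual servoing: keep the tracked object centred in the image while closing the range reported by the depth camera. The sketch below is a minimal proportional controller of that kind; the gains, image size, and stand-off distance are assumptions, not the authors' controller.

```python
def approach_velocity(bbox_center, depth_m, image_size=(640, 480),
                      target_depth_m=0.5, k_lateral=0.002, k_forward=0.6):
    """Proportional velocity command toward a tracked object.

    bbox_center : (u, v) pixel centre of the detected object
    depth_m     : range to the object from the depth camera, in metres
    Returns (vx, vy, vz) body-frame velocities: forward, lateral, vertical.
    """
    u, v = bbox_center
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    vy = -k_lateral * (u - cx)                    # steer to centre the target horizontally
    vz = -k_lateral * (v - cy)                    # adjust altitude to centre it vertically
    vx = k_forward * (depth_m - target_depth_m)   # close the range to the grasp distance
    return vx, vy, vz

# Example: target slightly right of centre, 2.1 m away
print(approach_velocity((400, 250), 2.1))
```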
Sensors (Basel)
January 2025
College of Metrology Measurement and Instrument, China Jiliang University, Hangzhou 310018, China.
This paper aims to address the challenge of precise robotic grasping of molecular sieve drying bags during automated packaging by proposing a six-dimensional (6D) pose estimation method based on a red-green-blue-depth (RGB-D) camera. The method consists of three components: point cloud pre-segmentation, target extraction, and pose estimation. A minimum bounding box-based pre-segmentation method was designed to minimize the impact of packaging wrinkles and skirt curling.
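A generic version of the three-stage pipeline the abstract outlines (pre-segmentation, target extraction, pose estimation) can be sketched with Open3D: crop the scene cloud to a workspace bounding box, remove the support plane, and register a reference model with ICP to obtain a 6D pose. The file names, bounds, and thresholds below are assumptions; this is not the authors' implementation.

```python
import numpy as np
import open3d as o3d

# Pre-segmentation: keep only points inside an axis-aligned workspace box
# (bounds are illustrative placeholders).
scene = o3d.io.read_point_cloud("scene.pcd")          # cloud from the RGB-D camera
workspace = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=(-0.3, -0.3, 0.2), max_bound=(0.3, 0.3, 0.8))
roi = scene.crop(workspace)

# Target extraction: simple plane removal to drop the table/conveyor surface.
_, inliers = roi.segment_plane(distance_threshold=0.005, ransac_n=3, num_iterations=500)
target = roi.select_by_index(inliers, invert=True)

# Pose estimation: align a reference model of the bag to the extracted cloud with ICP.
model = o3d.io.read_point_cloud("bag_model.pcd")
result = o3d.pipelines.registration.registration_icp(
    model, target, max_correspondence_distance=0.01,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("Estimated 6D pose (4x4 homogeneous transform):")
print(result.transformation)
```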