Camera positioning is critical in all telerobotic surgical systems: inadequate visualization of the remote site can lead to serious errors that jeopardize the patient. An autonomous camera algorithm has been developed on a da Vinci medical-robot simulator and found to be robust in key operating scenarios, moving the camera arm predictably and as expected with respect to the tool positions. The implementation of this system is described herein; the simulation closely models the methodology needed to implement autonomous camera control on a real hardware system. The camera control algorithm follows three rules: (1) keep the view centered on the tools, (2) keep the zoom level optimized so that the tools never leave the field of view, and (3) avoid unnecessary camera movement that may distract or disorient the surgeon. Our future work will apply this algorithm to the real da Vinci hardware.
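The three rules can be sketched as a simple update step. This is an illustrative reconstruction, not the paper's implementation: the planar camera model, field-of-view angle, margin, and deadband thresholds are all assumed values.

```python
import numpy as np

# Hypothetical sketch of the three camera rules; the thresholds and the
# simple look-at/stand-off camera model are assumptions for illustration.
FOV_MARGIN = 0.8      # tools must stay within 80% of the view cone (assumed)
MOVE_DEADBAND = 0.01  # drift (m) tolerated before moving the camera (assumed)
HALF_FOV_TAN = np.tan(np.radians(30.0))  # assumed 60-degree field of view

def camera_update(tool_a, tool_b, cam_target, cam_distance):
    """Return an updated (look-at target, stand-off distance) for the camera.

    tool_a, tool_b : np.ndarray, 3-D tool-tip positions
    cam_target     : np.ndarray, current camera look-at point
    cam_distance   : float, current stand-off distance along the view axis
    """
    # Rule 1: keep the view centered on the tools (midpoint of the tips).
    desired_target = (tool_a + tool_b) / 2.0

    # Rule 2: choose a distance so both tools fit inside the view cone
    # with a safety margin, so they never leave the field of view.
    spread = np.linalg.norm(tool_a - tool_b) / 2.0
    desired_distance = spread / (HALF_FOV_TAN * FOV_MARGIN)

    # Rule 3: suppress small corrections that would only distract or
    # disorient the surgeon; move only when drift exceeds the deadband.
    if (np.linalg.norm(desired_target - cam_target) < MOVE_DEADBAND
            and abs(desired_distance - cam_distance) < MOVE_DEADBAND):
        return cam_target, cam_distance
    return desired_target, desired_distance
```

A real system would additionally rate-limit the camera arm and map the target/distance pair to joint commands, but the rule structure is the same.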
Nat Commun
December 2024
Department of Computer Science, The University of Hong Kong, Pokfulam Rd, Hong Kong SAR, China.
Proper exposure settings are crucial for modern machine vision cameras to accurately convert light into clear images. However, traditional auto-exposure solutions are vulnerable to illumination changes, interrupting the continuous acquisition of unsaturated images and significantly degrading the overall performance of the intelligent systems built on them. Here we present the neuromorphic exposure control (NEC) system.
Sci Rep
December 2024
School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.
Autonomous vehicles, often known as self-driving cars, have emerged as a disruptive technology with the promise of safer, more efficient, and convenient transportation. Existing works achieve workable results but lack effective solutions: accumulation on the road surface can obscure lane markings and traffic signs, making it difficult for a self-driving car to navigate safely. Heavy rain, snow, fog, or dust storms can severely limit the ability of the car's sensors to detect obstacles, pedestrians, and other vehicles, posing potential safety risks.
J Imaging
November 2024
Cerema, Research Team "Intelligent Transport Systems", 8-10 Rue Bernard Palissy, CEDEX 2, F-63017 Clermont-Ferrand, France.
Accurate luminance-based image generation is critical in physically based simulations, as even minor inaccuracies in radiative transfer calculations can introduce noise or artifacts, adversely affecting image quality. The radiative transfer simulator, SWEET, uses a backward Monte Carlo approach, and its performance is analyzed alongside other simulators to assess how Monte Carlo-induced biases vary with parameters like optical thickness and medium anisotropy. This work details the advancements made to SWEET since the previous publication, with a specific focus on a more comprehensive comparison with other simulators such as Mitsuba.
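The optical-thickness dependence of Monte Carlo bias and noise can be seen even in a toy estimator. The sketch below is not SWEET: it is a minimal backward-style sampler for a homogeneous slab, where the sampled free-flight optical depth decides whether a path traced from the sensor crosses the medium (Beer-Lambert law). All names and parameters here are illustrative assumptions.

```python
import math
import random

def mc_transmittance(tau, n_paths, rng=random.Random(0)):
    """Estimate the slab transmittance T = exp(-tau) by Monte Carlo.

    Each path samples an exponentially distributed free-flight optical
    depth; paths whose free flight exceeds tau cross the slab.
    """
    hits = 0
    for _ in range(n_paths):
        free_path_tau = -math.log(1.0 - rng.random())  # optical depth travelled
        if free_path_tau > tau:                        # photon crosses the slab
            hits += 1
    return hits / n_paths

tau = 3.0
est = mc_transmittance(tau, 100_000)
exact = math.exp(-tau)
# As tau grows, fewer paths survive, so the relative error of the estimate
# rises -- the optical-thickness dependence the comparison above studies.
```

Scattering, anisotropy (e.g. a Henyey-Greenstein phase function), and luminance weighting would be layered on top of this skeleton in a full simulator.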
Sci Rep
December 2024
Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India.
Haze can significantly reduce the visibility and contrast of images captured outdoors, necessitating enhancement of the images. This degradation in image quality can adversely affect various applications, including autonomous driving, object detection, and surveillance, where poor visibility may lead to navigation errors and obscure crucial details. Existing dehazing techniques face several challenges: spatial methods tend to be computationally heavy, transform methods often fall short in quality, hybrid methods can be intricate and demanding, and deep learning methods require extensive datasets and computational power.
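As one concrete example of the spatial-domain family the abstract calls computationally heavy, the classic dark channel prior (He et al., 2009) estimates haze transmission from a per-patch minimum. The naive sketch below uses the usual default parameters; it is for exposition only, and its nested loops show exactly where the cost comes from.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels and a local square patch."""
    h, w, _ = img.shape
    chan_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):            # the nested per-pixel window scans are
        for j in range(w):        # why spatial methods tend to be slow
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """Haze transmission t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)
```

Given the transmission map, the haze-free scene is recovered by inverting the atmospheric scattering model; production implementations replace the inner loops with a fast min-filter (erosion) and refine t(x) with guided filtering.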
Front Robot AI
December 2024
School of Electrical and Electronic Engineering, University of Sheffield, Sheffield, United Kingdom.
This paper proposes a solution to the challenging task of autonomously landing Unmanned Aerial Vehicles (UAVs). An onboard computer-vision module integrates the vision system with the ground-control communication and video-server connection. The vision platform extracts features using Speeded Up Robust Features (SURF), applies fast Structured Forests edge detection, and then smooths the estimates with a Kalman filter for accurate runway-sideline prediction.
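The final smoothing stage can be illustrated with a small Kalman filter over one sideline parameter (e.g. its lateral offset in pixels). The constant-velocity state model and noise levels below are assumptions for illustration, not the paper's tuned filter, and the SURF/edge stages are not shown.

```python
import numpy as np

def kalman_smooth(measurements, dt=1.0, q=1e-3, r=4.0):
    """Smooth noisy per-frame sideline offsets with a 1-D Kalman filter.

    State is [offset, offset_rate]; q and r are assumed process and
    measurement noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # we observe the offset only
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    smoothed = []
    for z in measurements:
        # Predict the next state from the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct with the new edge-detected sideline measurement.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed
```

In practice both sidelines (and possibly their angles) would share one state vector, but the predict/correct cycle is identical.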