Estimating scene depth from images and the egomotion of an agent is important for autonomous vehicles and robots to understand their surroundings and avoid collisions. Most existing unsupervised methods estimate depth and camera egomotion by minimizing the photometric error between adjacent frames. However, the photometric-consistency assumption often breaks down in practice, for example under brightness changes, moving objects, and occlusion. To reduce the influence of brightness changes, we propose a feature pyramid matching loss (FPML) that measures the error between learned features of the current frame and its adjacent frames, and is therefore more robust than raw photometric error. In addition, we propose an occlusion-aware mask (OAM) network that indicates occluded regions from changes in the predicted masks, improving the accuracy of depth and camera-pose estimation. The experimental results verify that the proposed unsupervised approach is highly competitive with state-of-the-art methods, both qualitatively and quantitatively. Specifically, our method reduces absolute relative error (Abs Rel) by 0.017-0.088.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7866542 | PMC |
| http://dx.doi.org/10.3390/s21030923 | DOI Listing |
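As a rough illustration of the feature-matching idea in the abstract above, a per-level L1 comparison of feature pyramids might look like the following PyTorch sketch; the function name, inputs, and uniform weighting are illustrative assumptions, not the paper's actual FPML implementation.

```python
import torch

def feature_pyramid_matching_loss(feats_cur, feats_warped, weights=None):
    """Per-level L1 distance between two feature pyramids.

    feats_cur / feats_warped: lists of [B, C, H, W] tensors, one per
    pyramid level, for the current frame and for the adjacent frame
    warped into the current view via the predicted depth and pose.
    Uniform level weighting is an illustrative assumption.
    """
    if weights is None:
        weights = [1.0] * len(feats_cur)
    loss = 0.0
    for w, fc, fw in zip(weights, feats_cur, feats_warped):
        # Mean absolute feature difference at this pyramid level.
        loss = loss + w * (fc - fw).abs().mean()
    return loss / sum(weights)
```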
High-resolution depth imaging is essential in fields such as biological microscopy and material science. Traditional techniques like interferometry and holography often rely on phase stability and coherence, making them susceptible to noise and limiting their effectiveness in low-light conditions. We propose a time-of-flight (ToF) widefield microscopy technique that uses pseudo-thermal light.
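For context, the core time-of-flight relation behind such systems is simply depth from round-trip travel time; the helper below is a generic illustration, not the authors' pseudo-thermal correlation pipeline.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth_m(round_trip_time_s):
    """Convert a measured round-trip photon travel time to one-way depth.

    Generic reflection-mode ToF relation (depth = c * t / 2); the
    pseudo-thermal light processing in the article is not modeled here.
    """
    return 0.5 * C * round_trip_time_s
```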
Neurophotonics
January 2025
University of Kentucky, Department of Biomedical Engineering, Lexington, Kentucky, United States.
Significance: Cerebral blood flow (CBF) imaging is crucial for diagnosing cerebrovascular diseases. However, existing large neuroimaging systems are expensive, offer low sampling rates, and lack portability, making them unsuitable for continuous, longitudinal CBF monitoring at the bedside.
Aim: We aimed to develop a low-cost, portable, programmable scanning diffuse speckle contrast imaging (PS-DSCI) technology for fast, high-density, and depth-sensitive imaging of CBF in rodents.
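For reference, diffuse speckle contrast imaging builds on the standard local speckle-contrast statistic K = sigma / mean; a minimal sketch, assuming a NumPy/SciPy raw speckle frame (the window size is a typical choice, not the PS-DSCI system's exact parameter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window.

    Lower K corresponds to more speckle blurring, i.e. faster flow;
    1 / K**2 is a common relative blood-flow index.
    """
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw**2, size=window)
    var = np.clip(mean_sq - mean**2, 0.0, None)  # guard float round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```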
J Biomed Opt
January 2025
The Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States.
Significance: Laparoscopic surgery presents challenges in localizing oncological margins due to poor contrast between healthy and malignant tissues. Optical properties can uniquely identify tissue types and disease states with high sensitivity and specificity, making optical-property measurement a promising tool for surgical guidance. Although spatial frequency domain imaging (SFDI) effectively measures quantitative optical properties, deploying it in laparoscopy is challenging due to the constrained imaging environment.
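SFDI recovers optical properties from sinusoidally patterned illumination; the standard three-phase demodulation step (phases 0, 2*pi/3, 4*pi/3) can be sketched as follows. The laparoscopic system in the article will involve calibration steps not shown here.

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Standard three-phase SFDI demodulation.

    i1, i2, i3: images captured under sinusoidal illumination shifted
    by 0, 2*pi/3, and 4*pi/3. Returns the AC (modulated) and DC
    (planar) amplitude images.
    """
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc
```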
Sensors (Basel)
January 2025
Centre for Automation and Robotics (CAR UPM-CSIC), Escuela Técnica Superior de Ingeniería y Diseño Industrial (ETSIDI), Universidad Politécnica de Madrid, Ronda de Valencia 3, 28012 Madrid, Spain.
Human gait analysis is a fundamental area of investigation in biomechanics, clinical research, and many other interdisciplinary fields. Advances in visual sensor technology and machine learning algorithms have enabled substantial progress in human gait analysis systems. This paper presents a comprehensive review of advancements and recent findings in vision-based human gait analysis over the past five years, with special emphasis on the roles of vision sensors, machine learning algorithms, and technological innovations.
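As a toy example of the kind of parameter such vision-based systems extract, cadence can be estimated from a pose estimator's ankle-keypoint trajectory; the peak-detection heuristic below is an illustrative simplification, not a method from the reviewed literature.

```python
import numpy as np
from scipy.signal import find_peaks

def cadence_from_ankle_y(ankle_y, fps):
    """Estimate cadence (steps/min) from one ankle's vertical trajectory.

    ankle_y: 1-D array of per-frame ankle keypoint heights from a pose
    estimator. Peaks are used as a crude heel-strike proxy.
    """
    y = (ankle_y - np.mean(ankle_y)) / (np.std(ankle_y) + 1e-12)
    # Require at least ~0.4 s between detected steps.
    peaks, _ = find_peaks(y, distance=max(1, int(0.4 * fps)))
    if len(peaks) < 2:
        return 0.0
    duration_s = (peaks[-1] - peaks[0]) / fps
    return 60.0 * (len(peaks) - 1) / duration_s
```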
Sensors (Basel)
January 2025
Department of Mechanical and Intelligent Systems Engineering, The University of Electro-Communications, Tokyo 1828585, Japan.
Recently, aerial manipulation has become increasingly important for practical applications of unmanned aerial vehicles (UAVs) that pick up, transport, and place objects in the surrounding environment. In this paper, an aerial manipulation system consisting of a UAV, two onboard cameras, and a multi-fingered robotic hand with proximity sensors is developed. To achieve self-contained autonomous navigation to a targeted object, onboard tracking and depth cameras detect the target object and guide the UAV to reach it, even in a Global Positioning System-denied environment.
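Turning a detected target pixel plus its depth-camera range into a 3-D goal for the UAV reduces, at its core, to standard pinhole back-projection; a minimal sketch, assuming intrinsics from the depth camera's calibration (the paper's actual navigation stack is not reproduced here):

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) plus metric depth -> 3-D
    point in the camera frame. fx, fy are focal lengths in pixels and
    (cx, cy) is the principal point, both from camera calibration.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```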