Applications such as autonomous navigation, robot vision, and autonomous flying require depth map information of a scene. Depth can be estimated with a single moving camera (depth from motion); however, traditional depth-from-motion algorithms have low processing speeds and high hardware requirements that limit their use in embedded systems. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation is an extension of the stereo matching problem: we propose a pixel-parallel/window-parallel approach in which a correlation function based on the sum of absolute differences (SAD) computes the optical flow. Further, to improve the SAD, we propose the curl of the intensity gradient as a preprocessing step. Experimental results demonstrate that it is possible to reach higher accuracy (90%) than previous Field Programmable Gate Array (FPGA)-based optical flow algorithms. For depth estimation, our algorithm delivers dense maps with motion and depth information for all image pixels, with a processing speed up to 128 times faster than that of previous work, making high performance possible in the context of embedded applications.
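As a rough illustration of the ideas summarized in the abstract, the sketch below gives plain-NumPy stand-ins for the main steps: a finite-difference curl-of-the-intensity-gradient preprocessing pass, SAD-based block matching, and a simple flow-to-depth conversion under a pure-translation assumption. Function names, window and search sizes, and the motion model are illustrative assumptions; this is a sequential software sketch, not the paper's pixel-parallel/window-parallel FPGA architecture.

```python
import numpy as np


def curl_of_gradient(frame):
    """Finite-difference stand-in for the curl-of-intensity-gradient preprocessing.

    This is an assumption about the preprocessing, not the paper's exact operator:
    it computes the z-component of the curl of the discrete gradient field, which
    vanishes for smooth images but responds to noise and sharp structure.
    """
    frame = frame.astype(np.float32)
    gy, gx = np.gradient(frame)            # gradients along rows (y) and columns (x)
    dgy_dx = np.gradient(gy, axis=1)
    dgx_dy = np.gradient(gx, axis=0)
    return dgy_dx - dgx_dy


def sad_flow(prev, curr, window=7, search=4):
    """Brute-force SAD block matching between two (preprocessed) frames.

    Returns an (H, W, 2) array of integer (dy, dx) displacements. The nested loops
    emulate in software what the hardware evaluates in parallel per pixel/window.
    """
    prev = prev.astype(np.float32)
    curr = curr.astype(np.float32)
    h, w = prev.shape
    half = window // 2
    flow = np.zeros((h, w, 2), dtype=np.float32)
    for y in range(half + search, h - half - search):
        for x in range(half + search, w - half - search):
            ref = prev[y - half:y + half + 1, x - half:x + half + 1]
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[y + dy - half:y + dy + half + 1,
                                x + dx - half:x + dx + half + 1]
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            flow[y, x] = (best_dy, best_dx)
    return flow


def flow_to_depth(flow, focal_px, cam_speed, frame_dt, eps=1e-6):
    """Toy flow-to-depth conversion assuming pure sideways camera translation.

    Under that assumption depth is roughly focal * baseline / flow magnitude,
    with baseline = cam_speed * frame_dt. The paper's flow/depth transformation
    is not reproduced here; treat this as an illustrative placeholder.
    """
    mag = np.linalg.norm(flow, axis=2)
    return focal_px * cam_speed * frame_dt / np.maximum(mag, eps)


# Example usage on synthetic frames (curl preprocessing applied before matching).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.random((64, 64))
    curr = np.roll(prev, shift=2, axis=1)   # simulate a 2-pixel horizontal shift
    flow = sad_flow(curl_of_gradient(prev), curl_of_gradient(curr))
    depth = flow_to_depth(flow, focal_px=500.0, cam_speed=1.0, frame_dt=0.04)
```

The brute-force search is what makes a hardware mapping attractive: every candidate SAD within the search range is independent, so the per-pixel and per-window computations can be evaluated concurrently instead of in nested loops as in this sketch.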

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6338951
DOI: http://dx.doi.org/10.3390/s19010053

Publication Analysis

Top Keywords

depth motion: 16
optical flow: 16
depth: 8
hardware architecture: 8
motion algorithm: 4
algorithm hardware: 4
architecture smart: 4
smart cameras: 4
cameras applications: 4
applications autonomous: 4

Similar Publications

Background: People with the chronic disease Multiple Sclerosis are subjected to different degrees of profound uncertainty. Uncertainty has been linked to adverse psychological effects such as feelings of heightened vulnerability, avoidance of decision-making, fear, worry, anxiety disorders, and even depression. Research into Multiple Sclerosis has a predominant focus on the scientific, practical, and psychosocial issues of uncertainty.

Falls and balance impairment; what and how has this been measured in adults with joint hypermobility? A scoping review.

BMC Musculoskelet Disord

January 2025

The Nick Davey Laboratory, Division of Surgery, Department of Surgery and Cancer, Faculty of Medicine, Sir Michael Uren Hub, Imperial College London, White City Campus, 86 Wood Lane, London, W12 0BZ, UK.

Background: People with joint hypermobility have excessive joint flexibility, which is more common in young women. People with symptomatic hypermobility report poor balance and even falls. This scoping review aims to identify and map the available evidence on balance and falling in adults with joint hypermobility, to support research planning and inform treatment directions.

The Laser Interferometer Space Antenna (LISA) mission is designed to detect space gravitational wave sources in the millihertz band. A critical factor in the success of this mission is the residual acceleration noise metric of the internal test mass (TM) within the ultra-precise inertial sensors. Existing studies indicate that the coupling effects of residual gas and temperature gradient fluctuations significantly influence this metric, primarily manifesting as the radiometer effect and the outgassing effect.

Measuring joint range of motion (ROM) is essential for diagnosing and treating musculoskeletal diseases. However, most clinical measurements are conducted using conventional devices, and their reliability may depend significantly on the tester. This study implemented an RGB-D (red/green/blue-depth) sensor-based artificial intelligence (AI) device to measure joint ROM and compared its reliability with that of a universal goniometer (UG).

Objects project different images when viewed from varying locations, but the visual system can correct perspective distortions and identify objects across viewpoints. This study investigated the conditions under which the visual system allocates computational resources to construct view-invariant, extraretinal representations, focusing on planar symmetry. When a symmetrical pattern lies on a plane, its symmetry in the retinal image is degraded by perspective.
