The use of Personal Mobile Terrestrial Systems (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition from a ground perspective in places where wheeled platforms are unfeasible, such as forests and building interiors. PMTS have become more popular with emerging technologies such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors were not developed for high-accuracy metric purposes and therefore require rigorous data acquisition and processing methods to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and to potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and evaluates it in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as initial information can increase trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had an accuracy consistent with the Ground Sample Distance (GSD) range of the omnidirectional images (3.5-7 cm). These results are consistent with those obtained for other PMTS approaches.
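For context on the accuracy figure above, the short sketch below computes an approximate ground sample distance for an equirectangular (360-degree) panorama from its pixel width and the object range. The panorama width and the distances are illustrative assumptions chosen only to land in the 3.5-7 cm band quoted in the abstract; they are not values reported in the paper.

```python
import math

def omnidirectional_gsd(image_width_px: int, distance_m: float) -> float:
    """Approximate ground sample distance (m/pixel) of an equirectangular
    panorama: each pixel column spans 2*pi / width radians horizontally,
    so its footprint at a given range is that angle times the distance."""
    angular_resolution = 2.0 * math.pi / image_width_px  # rad per pixel
    return distance_m * angular_resolution

if __name__ == "__main__":
    # Illustrative values only (not from the paper): an 1800-pixel-wide
    # panorama gives roughly 3.5 cm at 10 m and 7 cm at 20 m.
    for d in (10.0, 20.0):
        print(f"range {d:4.1f} m -> GSD {100 * omnidirectional_gsd(1800, d):.1f} cm")
```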

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5876751
DOI: http://dx.doi.org/10.3390/s18030827

Publication Analysis

Top Keywords

omnidirectional camera (8), camera off-the-shelf (8), off-the-shelf navigation (8), navigation sensors (8), mobile terrestrial (8), mobile mapping (8), mapping applications (8), data acquisition (8), low-cost pmts (8), pmts (6)

Similar Publications

Depth estimation is a fundamental task in many vision applications. With the growing popularity of omnidirectional cameras, tackling this problem in the spherical domain has become a new trend. In this paper, we propose a learning-based method for predicting dense depth values of a scene from a monocular omnidirectional image.

This paper presents a visual compass method utilizing global features, specifically spherical moments. One of the primary challenges faced by photometric methods employing global features is the variation in the image caused by the appearance and disappearance of regions within the camera's field of view as it moves. Additionally, modeling the impact of translational motion on the values of global features poses a significant challenge, as it is dependent on scene depths, particularly for non-planar scenes.

Traditional multilayer antireflection (AR) surfaces are of significant importance for numerous applications, such as laser optics, camera lenses, and eyeglasses. Recently, technological advances in the fabrication of biomimetic AR surfaces capable of delivering broadband omnidirectional high transparency combined with self-cleaning properties have opened an alternative route toward realization of multifunctional surfaces which would be beneficial for touchscreen displays or solar harvesting devices. However, achieving the desired surface properties often requires sophisticated lithography fabrication methods consisting of multiple steps.

Currently, practical implementations of panoramic cameras range from vehicle navigation to space studies, due in particular to their 360-degree imaging capability. Across these uses, three-dimensional coordinates can be calculated from a panoramic image, notably with the Direct Linear Transformation (DLT) method. Omnidirectional cameras for 360-degree imaging can be classified mainly as central and non-central cameras.
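As a rough illustration of the DLT idea mentioned above (the standard linear perspective formulation, not the panoramic variant that paper develops), the sketch below triangulates a 3D point from two views by stacking the linear projection constraints and taking the SVD null space. The projection matrices and the point are hypothetical values chosen only for the example.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT-style) triangulation: stack the constraints derived from
    x_i ~ P_i X for two views and take the SVD null space as the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

if __name__ == "__main__":
    # Hypothetical cameras: shared intrinsics K, second camera shifted 1 m
    # along X. All numbers are illustrative, not taken from the paper.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 5.0, 1.0])
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate_dlt(P1, P2, x1, x2))  # approx [0.3, -0.2, 5.0]
```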

Ground target detection and positioning systems based on lightweight unmanned aerial vehicles (UAVs) are increasing in value for aerial reconnaissance and surveillance. However, the current method for estimating the target's position is limited by the field of view angle, rendering it challenging to fulfill the demands of a real-time omnidirectional reconnaissance operation. To address this issue, we propose an Omnidirectional Optimal Real-Time Ground Target Position Estimation System (Omni-OTPE) that utilizes a fisheye camera and LiDAR sensors.
