The research field of visual-inertial odometry has matured in recent years, but non-negligible problems remain. Users must trade off high accuracy against low computation. In addition, notation confusion exists in quaternion descriptions of rotation; although not fatal, it may cause unnecessary difficulty for researchers trying to understand the literature. In this paper, we develop a visual-inertial odometry solution that balances precision and computation. The proposed algorithm is a filter-based solution built on the framework of the well-known multi-state constraint Kalman filter. To dispel the notation confusion, we derive the error-state transition equation from scratch using the more intuitive Hamilton notation of quaternions. We further present a fully linear closed-form formulation that is readily implemented. As the filter-based back-end is vulnerable to feature-matching outliers, a descriptor-assisted optical flow tracking front-end was developed to cope with the issue; this modification requires only negligible additional computation. In addition, an initialization procedure is implemented that automatically selects static data to initialize the filter state. The proposed methods were evaluated on a public real-world dataset and compared with state-of-the-art solutions. The experimental results show that the proposed solution is comparable in precision and more computationally efficient than the state of the art.
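The notation issue the abstract refers to is the choice between the Hamilton and JPL quaternion conventions, which differ in the sign pattern of the product. A minimal sketch of the Hamilton product in scalar-first (w, x, y, z) order, with illustrative helper names not taken from the paper:

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product q1 ⊗ q2, scalar-first (w, x, y, z) convention.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    # Rotate vector v by unit quaternion q via v' = q ⊗ (0, v) ⊗ q*.
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

# A 90° rotation about the z-axis maps the x-axis onto the y-axis.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(np.round(rotate(q, np.array([1.0, 0.0, 0.0])), 6))  # [0. 1. 0.]
```

Under the JPL convention the cross-product terms carry the opposite sign, which is exactly the kind of mismatch the paper's from-scratch derivation is meant to avoid.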
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6515200
DOI: http://dx.doi.org/10.3390/s19081941
Sensors (Basel)
December 2024
Institute of Computer Science, Zurich University of Applied Sciences, 8400 Winterthur, Switzerland.
Simultaneous localization and mapping (SLAM) techniques can be used to navigate the visually impaired, but the development of robust SLAM solutions for crowded spaces is limited by the lack of realistic datasets. To address this, we introduce InCrowd-VI, a novel visual-inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments. Recorded using Meta Aria Project glasses, it captures realistic scenarios without environmental control.
Sci Rep
December 2024
Department of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran.
In today's technologically advanced landscape, precise navigation and positioning are of paramount importance across applications ranging from robotics to autonomous vehicles. A common predicament in location-based systems is the reliance on Global Positioning System (GPS) signals, whose accuracy and reliability can degrade under certain conditions. Moreover, even when integrated with an Inertial Navigation System (INS), the combined GPS/INS system cannot bridge long outages, because INS errors accumulate over time.
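The accumulated-error problem described above can be illustrated with a short dead-reckoning sketch (the bias value and rate are hypothetical, not from the paper): a constant uncorrected accelerometer bias, integrated twice, yields a position error that grows quadratically with time, which is why a stand-alone INS cannot cover long GPS outages.

```python
import numpy as np

dt = 0.01            # 100 Hz IMU sample period (assumed)
bias = 0.05          # constant accelerometer bias in m/s^2 (assumed)
t = np.arange(0, 60, dt)

# Integrate the bias once to get velocity error, twice for position error.
vel_err = np.cumsum(np.full_like(t, bias)) * dt
pos_err = np.cumsum(vel_err) * dt

# After 60 s the drift is roughly 0.5 * bias * t^2 ≈ 90 m.
print(f"position error after 60 s: {pos_err[-1]:.1f} m")
```

Even this small bias produces tens of meters of drift within a minute, illustrating why periodic GPS corrections (or another absolute reference) are essential.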
Event-based cameras offer unique advantages over traditional cameras, such as high dynamic range, absence of motion blur, and microsecond-level latency. This paper introduces, to our knowledge, an innovative approach to visual odometry that integrates the newly proposed Rotated Binary DART (RBD) descriptor within a Visual-Inertial Navigation System (VINS)-based event visual odometry framework. Our method leverages event optical flow and RBD for precise feature selection and matching, ensuring robust performance in dynamic environments.
View Article and Find Full Text PDFSensors (Basel)
November 2024
School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China.
Composite robots often encounter difficulties due to changes in illumination, external disturbances, reflective surface effects, and cumulative errors. These challenges significantly hinder their environmental perception and the accuracy and reliability of pose estimation. We propose a nonlinear optimization approach to overcome these issues and develop an integrated localization and navigation framework, IIVL-LM (IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping).
Sensors (Basel)
November 2024
School of Computer Science and Technology, North University of China, Taiyuan 030051, China.
Simultaneous Localization And Mapping (SLAM) algorithms play a critical role in autonomous exploration tasks, which require mobile robots to autonomously explore and gather information in unknown or hazardous environments where human access may be difficult or dangerous. However, because mobile robots are resource-constrained, they are hindered from performing long-term and large-scale tasks. In this paper, we propose an efficient multi-robot dense SLAM system that utilizes a centralized structure to alleviate the computational and memory burdens on the agents.