The fusion of visual and inertial measurements for motion tracking has become prevalent in the robotics community, owing to the two sensors' complementary characteristics, low cost, and small footprint. This fusion task is known as the vision-aided inertial navigation system problem. We present a novel hybrid sliding window optimizer that achieves information fusion for a tightly coupled vision-aided inertial navigation system, combining the advantages of both the conditioning-based method and the prior-based method. We also design a novel distributed marginalization method, based on the multi-state constraint method, that offers significant efficiency improvements over the traditional approach. The proposed algorithm was evaluated on the publicly available EuRoC datasets and showed competitive results compared with existing algorithms.
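The marginalization step mentioned above can be illustrated with a toy Schur-complement reduction of the normal equations. This is a generic sketch of sliding-window marginalization, not the paper's distributed variant; the function name and dimensions are illustrative:

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m states out of the normal equations H dx = b
    via the Schur complement, yielding a dense prior on the remaining states."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm)
    # Schur complement: the reduced system has the same solution for the
    # remaining states as the full system.
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr
    b_prior = br - Hrm @ Hmm_inv @ bm
    return H_prior, b_prior
```

Solving the reduced system recovers exactly the same estimate for the kept states as solving the full system, which is why marginalization preserves information from dropped states in a compact prior.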
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6696157
DOI: http://dx.doi.org/10.3390/s19153418
Sensors (Basel)
February 2024
School of the Built Environment and Architecture, London South Bank University, London SE1 0AA, UK.
Walking speed is a significant factor in evacuation efficiency, and it varies during fire emergencies with individuals' physical abilities. Moreover, it is not always possible to maintain an upright posture during an evacuation; atypical postures, such as stooped walking or crawling, may be required for survival. In this study, a novel 3D passive vision-aided inertial system (3D PVINS) for indoor positioning was used to track the movement of 20 volunteers evacuating in a low-visibility environment.
ISA Trans
June 2023
Department of Mechatronics and Automation, Faculty of Engineering, University of Szeged, Moszkvai krt. 9, 6725 Szeged, Hungary.
This paper proposes a novel observer-based controller for a Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicle (UAV), designed to receive measurements directly from a Vision-Aided Inertial Navigation System (VA-INS) and produce the required thrust and rotational torque inputs. The VA-INS is composed of a vision unit (a monocular or stereo camera) and a typical low-cost 6-axis Inertial Measurement Unit (IMU) comprising an accelerometer and a gyroscope. A major benefit of this approach is its applicability in environments where the Global Positioning System (GPS) is inaccessible.
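As a rough illustration of how an estimated state can be mapped to thrust and torque commands, here is a toy small-angle PD loop. The gains, vehicle mass, and overall structure are assumptions for illustration only, not the paper's observer-based design:

```python
import numpy as np

# Illustrative vehicle parameters and gains (assumed, not from the paper).
MASS, G = 1.2, 9.81          # kg, m/s^2
KP_POS, KD_POS = 4.0, 2.5    # position-loop gains
KP_ATT, KD_ATT = 8.0, 1.0    # attitude-loop gains

def control(p, v, att, omega, p_ref):
    """Map an estimated state (position p, velocity v, small-angle attitude
    att, body rate omega) to a collective thrust and a body torque."""
    a_des = KP_POS * (p_ref - p) - KD_POS * v            # desired acceleration
    thrust = MASS * (G + a_des[2])                       # vertical thrust
    att_des = np.array([-a_des[1] / G, a_des[0] / G, 0.0])  # tilt to steer x/y
    torque = KP_ATT * (att_des - att) - KD_ATT * omega   # attitude PD
    return thrust, torque
```

At hover (zero error, zero rates) the loop commands exactly the weight-compensating thrust and zero torque, which is the sanity check any such controller must pass.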
Sensors (Basel)
June 2022
School of Marine Science and Technology, Northwestern Polytechnical University, 127 West Youyi Road, Xi'an 710072, China.
The failure of the traditional initial alignment algorithm for the strapdown inertial navigation system (SINS) at high latitudes is a significant challenge, owing to the rapid convergence of meridians near the poles. This paper presents a novel vision-aided initial alignment method for the SINS of autonomous underwater vehicles (AUVs) in polar regions. We redesign the initial alignment model by combining inertial navigation mechanization equations in a transverse coordinate system (TCS) with visual measurements from a camera fixed on the vehicle.
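To give a concrete sense of a transverse coordinate system, the sketch below converts geographic latitude/longitude to transverse coordinates under one common convention that places the transverse pole on the equator at 0° N, 0° E; the paper's exact TCS definition may differ:

```python
import numpy as np

def geographic_to_transverse(lat, lon):
    """Convert geographic latitude/longitude (radians) to transverse
    latitude/longitude, with the transverse pole at (0 deg N, 0 deg E)."""
    # Unit vector of the point in the Earth-fixed frame.
    x = np.cos(lat) * np.cos(lon)   # toward (0, 0): the transverse pole
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    lat_t = np.arcsin(x)            # transverse latitude
    lon_t = np.arctan2(y, z)        # transverse longitude
    return lat_t, lon_t
```

In this frame the geographic North Pole lies on the transverse equator, so near the true poles the transverse meridians no longer converge and mechanization equations remain well conditioned.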
Sensors (Basel)
May 2020
Intelligent Robotics Laboratory, Department of Control and Robot Engineering, Chungbuk National University, Chungdae-ro 1, Seowon-Gu, Cheongju-si 28644, Chungbuk, Korea.
Robotic mapping and odometry are the primary competencies of a navigation system for an autonomous mobile robot. However, the robot's state estimate typically accumulates drift over time, and its accuracy degrades critically when only proprioceptive sensors are used in indoor environments. In addition, the accuracy of ego-motion estimation is severely diminished in dynamic environments because of the influence of both dynamic objects and light reflections.
Sensors (Basel)
April 2020
Faculty of Aerospace Engineering, The Pennsylvania State University, 229 Hammond Building, University Park, PA 16802, USA.
With the advent of unmanned aerial vehicles (UAVs), a major area of interest in the research field of UAVs has been vision-aided inertial navigation systems (V-INS). In the front-end of V-INS, image processing extracts information about the surrounding environment and determines features or points of interest. With the extracted vision data and inertial measurement unit (IMU) dead reckoning, the most widely used algorithm for estimating vehicle and feature states in the back-end of V-INS is an extended Kalman filter (EKF).
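A minimal EKF predict/update pair illustrates the back-end fusion described above; this is the textbook linear form with illustrative matrices, not the specific V-INS filter:

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate the state and covariance with (linearized) dynamics F."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Fuse a measurement z = H x + noise via the standard EKF update."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)      # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a V-INS back-end, the predict step would be driven by IMU dead reckoning and the update step by the reprojection of tracked visual features, with F and H obtained by linearizing the respective models.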