Map building is crucial in many robotic applications, and 3D maps allow a robot to estimate the positions of surrounding objects and obstacles. Most previous research processes 3D point clouds with projection-based or voxel-based models, but both approaches have limitations. This paper proposes a hybrid localization and mapping method that uses stereo vision and LiDAR. Unlike traditional single-sensor systems, we construct a pose optimization model by matching ground information between LiDAR maps and visual images: stereo vision extracts the ground information, which is fused with LiDAR tensor voting data to establish coplanarity constraints. Pose optimization is performed with a graph-based optimization algorithm and a local window optimization method. The proposed method is evaluated on the KITTI dataset and compared against ORB-SLAM3, F-LOAM, LOAM, and LeGO-LOAM. Additionally, we generate 3D point cloud maps for the corresponding sequences and high-definition point cloud maps of the streets in sequence 00. The experimental results demonstrate significant improvements in trajectory accuracy and robustness, enabling the construction of clear, dense 3D maps.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11548508 | PMC |
| http://dx.doi.org/10.3390/s24216828 | DOI Listing |
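To make the coplanarity constraint from the abstract above concrete, the following Python sketch shows one way such a residual could be formed: the signed distance of LiDAR ground points to the stereo-derived ground plane after applying a candidate pose. This is only a minimal illustration under assumed conventions (rotation-vector pose, plane in n·x + d = 0 form, synthetic data), not the authors' implementation; the function names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def coplanarity_residuals(pose, lidar_ground_pts, plane_n, plane_d):
    """Signed distances of LiDAR ground points to the stereo-derived ground
    plane after applying a candidate pose (illustrative, not the paper's code).

    pose            : (6,) [rx, ry, rz, tx, ty, tz], rotation as a rotation vector
    lidar_ground_pts: (N, 3) ground points extracted from the LiDAR map
    plane_n, plane_d: unit normal and offset of the stereo ground plane,
                      with n . x + d = 0 for points x on the plane
    """
    rvec, t = pose[:3], pose[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K  # Rodrigues
    transformed = lidar_ground_pts @ R.T + t
    return transformed @ plane_n + plane_d  # one residual per ground point

# Toy usage: refine a pose so a synthetic LiDAR ground patch lies on the stereo plane.
pts = np.random.uniform(-5, 5, (200, 3)); pts[:, 2] = 0.0   # synthetic ground patch at z = 0
n, d = np.array([0.0, 0.0, 1.0]), -0.1                       # stereo plane z = 0.1
sol = least_squares(coplanarity_residuals, x0=np.zeros(6), args=(pts, n, d))
print("estimated translation:", sol.x[3:])
```

In the paper's pipeline, residuals of this kind would presumably enter the graph-based optimizer alongside other factors within the local window, rather than a standalone least-squares solve as in this toy example.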
Biomimetics (Basel)
December 2024
School of Artificial Intelligence, Tongmyong University, Busan 48520, Republic of Korea.
Depth estimation plays a pivotal role in advancing human-robot interactions, especially in indoor environments where accurate 3D scene reconstruction is essential for tasks like navigation and object handling. Monocular depth estimation, which relies on a single RGB camera, offers a more affordable solution compared to traditional methods that use stereo cameras or LiDAR. However, despite recent progress, many monocular approaches struggle with accurately defining depth boundaries, leading to less precise reconstructions.
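As a reminder of what 3D reconstruction from a single camera involves downstream of the network, the sketch below back-projects a per-pixel depth map into a camera-frame point cloud with the pinhole model. The depth map and intrinsics are placeholder values; the output of any monocular depth network could be substituted.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (meters) into camera-frame 3D points
    using the pinhole model; a generic step, independent of which monocular
    network produced the depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column and row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy usage with a synthetic depth map and made-up intrinsics.
depth = np.full((480, 640), 2.0)              # a flat wall 2 m away
cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```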
Biomimetics (Basel)
December 2024
School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen, Shenzhen 518055, China.
Inspired by the eye movements of fish such as pipefish and sand lances, this paper presents a novel dynamic calibration method for active stereo vision systems that addresses the challenges posed by cameras with varying fields of view (FOVs). The method integrates static calibration based on camera rotation angles with dynamic updates: the relative pose between the rotation axis and the cameras is used to refresh the extrinsic parameters continuously in real time. This supports epipolar rectification as the FOV changes and enables precise disparity computation and accurate depth estimation.
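The following sketch illustrates, under simplifying assumptions, the kind of update such a system performs: a measured pan angle about a known rotation axis is folded into the static left-to-right extrinsic, and depth is then recovered from disparity via z = fB/d. The axis, baseline, and angle are made-up values, and the composition is a simplified stand-in for the paper's calibration model rather than its actual formulation.

```python
import numpy as np

def axis_angle_rotation(axis, angle_rad):
    """Rotation matrix for a rotation of angle_rad about a unit axis (Rodrigues)."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * K @ K

def update_extrinsics(R_static, t_static, axis, angle_rad):
    """Fold the measured pan angle about the camera's rotation axis into the
    static left-to-right extrinsic (a simplified model of the dynamic update)."""
    R_pan = axis_angle_rotation(axis, angle_rad)
    return R_pan @ R_static, R_pan @ t_static

# Toy usage: a 5-degree pan about the vertical axis, then depth from disparity.
R0, t0 = np.eye(3), np.array([-0.12, 0.0, 0.0])          # 12 cm baseline
R, t = update_extrinsics(R0, t0, axis=np.array([0.0, 1.0, 0.0]),
                         angle_rad=np.deg2rad(5.0))
focal_px, baseline_m, disparity_px = 700.0, np.linalg.norm(t), 35.0
depth_m = focal_px * baseline_m / disparity_px            # z = f * B / d
print(round(depth_m, 3), "m")
```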
Sensors (Basel)
December 2024
KIS*MED (AI Systems in Medicine), Technische Universität Darmstadt, 64283 Darmstadt, Germany.
In recent years, significant research has been conducted on video-based human pose estimation (HPE). While monocular two-dimensional (2D) HPE achieves high performance, monocular three-dimensional (3D) HPE is a considerably harder problem. However, since human motion occurs in 3D space, 3D HPE provides a more accurate representation of the body, making it more useful for complex tasks such as the analysis of physical exercise.
Sensors (Basel)
December 2024
Department of Information and Communication Engineering, Korea University of Technology and Education (KOREATECH), Cheonan-si 31253, Republic of Korea.
This paper presents a novel method to enhance ground truth disparity maps generated by Semi-Global Matching (SGM) using Maximum a Posteriori (MAP) estimation. While SGM does not produce outputs as visually appealing as those of neural networks, it offers high disparity accuracy in valid regions and avoids the generalization issues often encountered with neural-network-based disparity estimation. However, SGM struggles with occlusions and textureless areas, which leads to invalid disparity values.
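The sketch below shows the general setting with OpenCV's SGBM implementation on a synthetic stereo pair: disparities are computed, invalid pixels are identified, and, as a crude stand-in for the paper's MAP refinement, invalid values are filled from a local smoothness prior (the median of valid neighbors). All parameter values are arbitrary, and the fill step is only illustrative, not the proposed estimator.

```python
import numpy as np
import cv2

# Synthetic stereo pair: a blurred random texture and a copy shifted by 8 px.
rng = np.random.default_rng(0)
left = rng.uniform(0, 255, (240, 320)).astype(np.uint8)
left = cv2.GaussianBlur(left, (5, 5), 0)
right = np.roll(left, -8, axis=1)  # features move 8 px left -> disparity of 8

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7,
                             P1=8 * 7 * 7, P2=32 * 7 * 7,
                             uniquenessRatio=10, speckleWindowSize=50,
                             speckleRange=2)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point
valid = disp > 0.0                                          # invalid pixels are negative

# Crude stand-in for the MAP refinement: replace invalid disparities with the
# median of valid values in a local window (a smoothness prior only).
filled = disp.copy()
pad = 7
for v, u in zip(*np.where(~valid)):
    win = disp[max(0, v - pad):v + pad + 1, max(0, u - pad):u + pad + 1]
    good = win[win > 0.0]
    if good.size:
        filled[v, u] = np.median(good)
print("median estimated disparity:", np.median(filled[valid]))
```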
Sensors (Basel)
December 2024
Faculty of Engineering and Applied Sciences, Cranfield University, College Road, Bedford MK43 0AL, UK.
The use of drones or Unmanned Aerial Vehicles (UAVs) and other flying vehicles has increased exponentially over the last decade. These devices pose a serious threat to helicopter pilots, who must constantly maintain situational awareness in flight to avoid objects that could lead to a collision. This paper proposes an Airborne Visual Artificial Intelligence System that seeks to improve helicopter pilots' situational awareness (SA) in UAV-congested environments.