In recent years, multi-sensor fusion has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and related fields, and extrinsic calibration is a prerequisite for any multi-sensor fusion application. This paper proposes an automatic 3D LiDAR-to-camera calibration framework based on graph optimization. The system automatically identifies the position of the calibration pattern, builds a set of virtual feature point clouds from it, and calibrates the LiDAR against multiple cameras simultaneously. To test the framework, a multi-sensor system is assembled on a mobile robot equipped with a LiDAR, a monocular camera, and a stereo camera, and the pairwise calibration of the LiDAR with the two cameras is evaluated quantitatively and qualitatively. The results show that the method produces more accurate calibrations than state-of-the-art methods: the average error on the camera's normalized image plane is 0.161 mm. Because graph optimization is used, the original point cloud is refined jointly with the extrinsic parameters between the sensors, which effectively corrects errors introduced during data collection and makes the method robust to poor-quality data.
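The abstract does not spell out the optimization itself, but the core problem it describes can be illustrated with a minimal sketch: given 3D feature points in the LiDAR frame and their detections on each camera's normalized image plane, estimate the per-camera extrinsics by minimizing reprojection error. The sketch below uses plain nonlinear least squares over hypothetical correspondences; the data, function names, and two-camera setup are assumptions for illustration, and the paper's actual pipeline (virtual feature point clouds, joint refinement of the point cloud in the graph) is not reproduced here.

```python
# Minimal sketch: LiDAR-to-camera extrinsic calibration as nonlinear
# least squares over reprojection error on the normalized image plane.
# All data and names below are hypothetical; this is not the paper's
# graph-optimization pipeline, only the underlying geometric problem.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, pts_lidar, obs_per_cam):
    """Stacked normalized-plane residuals for all cameras.

    params: concatenated [rotvec(3), translation(3)] per camera.
    pts_lidar: (N, 3) feature points in the LiDAR frame.
    obs_per_cam: list of (N, 2) normalized-plane detections, one per camera.
    """
    res = []
    for i, obs in enumerate(obs_per_cam):
        rvec, t = params[6 * i:6 * i + 3], params[6 * i + 3:6 * i + 6]
        # Transform LiDAR points into the camera frame, then project
        # onto the normalized plane with a pinhole model: (x/z, y/z).
        pts_cam = Rotation.from_rotvec(rvec).apply(pts_lidar) + t
        proj = pts_cam[:, :2] / pts_cam[:, 2:3]
        res.append((proj - obs).ravel())
    return np.concatenate(res)

# Hypothetical data: 20 target corners seen by two cameras with known
# ground-truth extrinsics, plus a little detection noise.
rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 3], [1, 1, 6], (20, 3))
true = [np.r_[0.02, -0.01, 0.0, 0.1, 0.0, -0.05],
        np.r_[0.0, 0.03, 0.01, -0.2, 0.05, 0.0]]
obs = []
for p in true:
    pc = Rotation.from_rotvec(p[:3]).apply(pts) + p[3:]
    obs.append(pc[:, :2] / pc[:, 2:3] + rng.normal(0, 1e-4, (20, 2)))

sol = least_squares(residuals, np.zeros(12), args=(pts, obs))
print("estimated extrinsics (rotvec, t) per camera:")
print(sol.x.reshape(2, 6))
```

In a graph-optimization formulation such as the paper's, the same residuals would become edges in a factor graph, with the LiDAR feature points themselves also included as optimizable variables, which is what allows the point cloud to be corrected alongside the extrinsics.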
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8954836
DOI: http://dx.doi.org/10.3390/s22062221