Multisensor fusion-based road segmentation plays an important role in intelligent driving systems, since it provides the drivable area. Existing mainstream fusion methods mainly perform feature fusion in the image space, where perspective projection compresses the road and degrades performance on distant road regions. Since the bird's-eye view (BEV) of the LiDAR point cloud preserves the spatial structure of the horizontal plane, this article proposes a bidirectional fusion network (BiFNet) to fuse the camera image with the BEV of the point cloud. The network consists of two modules: 1) the dense space transformation (DST) module, which handles the mutual conversion between the camera image space and the BEV space, and 2) the context-based feature fusion module, which fuses information from the different sensors based on the scene content of the corresponding features. The method achieves competitive results on the KITTI dataset.
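The abstract does not spell out how the DST module converts between the two spaces, but any image-to-BEV mapping rests on standard camera-projection geometry. The sketch below is only that underlying geometry, not the paper's learned DST module: it maps one LiDAR point to an image pixel via intrinsics/extrinsics and to a BEV grid cell via a flat discretization. All parameter names, the BEV range, and the resolution are illustrative assumptions.

```python
# Hedged sketch: NOT the paper's DST module, just the standard
# geometry an image<->BEV correspondence relies on.
import numpy as np

def lidar_point_to_image_and_bev(p_lidar, K, T_cam_from_lidar,
                                 bev_range=(0.0, 40.0, -20.0, 20.0),
                                 bev_res=0.1):
    """Map one LiDAR point to an (u, v) image pixel and a (row, col) BEV cell.

    K: 3x3 camera intrinsics; T_cam_from_lidar: 4x4 extrinsics.
    bev_range: (x_min, x_max, y_min, y_max) in metres, LiDAR frame;
    bev_res: BEV cell size in metres. All values are illustrative.
    """
    p = np.append(np.asarray(p_lidar, dtype=float), 1.0)  # homogeneous coords
    p_cam = T_cam_from_lidar @ p                          # into camera frame
    uvw = K @ p_cam[:3]                                   # pinhole projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]               # perspective divide
    x_min, x_max, y_min, y_max = bev_range
    row = int((x_max - p_lidar[0]) / bev_res)             # forward axis -> rows
    col = int((p_lidar[1] - y_min) / bev_res)             # lateral axis -> cols
    return (u, v), (row, col)
```

Note the asymmetry this makes visible: the image coordinates depend on depth through the perspective divide (distant road shrinks), while the BEV cell is a uniform metric discretization, which is why the BEV branch preserves the horizontal spatial structure.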
DOI: http://dx.doi.org/10.1109/TCYB.2021.3105488