TSFF: a two-stage fusion framework for 3D object detection.

PeerJ Comput Sci

The School of Computer Engineering, Jimei University, Xiamen, Fujian, China.

Published: August 2024

Point clouds are highly regarded in the field of 3D object detection for their rich geometric properties and versatility. However, object occlusion and limitations of scanning equipment frequently result in sparse or missing data within point clouds, which degrades the final predictions. Recognizing the synergy between the rich semantic information in images and the geometric information in point clouds for scene representation, we introduce a two-stage fusion framework (TSFF) for 3D object detection. To address geometric information in point clouds corrupted by object occlusion, we augment point features with image features, strengthening the reference information available to the point cloud during the vote-offset (voting bias) stage. Furthermore, we introduce a constrained fusion module that selectively samples voting points using 2D bounding boxes, integrating valuable image features while reducing the influence of background points in sparse scenes. Our method was evaluated on the SUN RGB-D dataset, where it achieved a 3.6-point improvement in mean average precision (mAP) under the mAP@0.25 criterion over the baseline. Compared with other strong 3D object detection methods, our method achieved excellent performance on several object categories.
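To make the two ideas in the abstract more concrete, the sketch below illustrates them in PyTorch: (1) per-point image features are concatenated with point features before vote-offset regression, and (2) voted points are kept only if their 2D projection falls inside a detected 2D bounding box. This is a minimal illustrative sketch; all names, tensor shapes, and the pinhole projection model are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the TSFF ideas described in the abstract.
# Shapes, module names, and the calibration model are assumptions.
import torch
import torch.nn as nn


class FusedVotingHead(nn.Module):
    """Predict per-point vote offsets from concatenated point + image features."""

    def __init__(self, point_dim=256, image_dim=256):
        super().__init__()
        self.vote_mlp = nn.Sequential(
            nn.Linear(point_dim + image_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 3),  # xyz offset toward the object centre
        )

    def forward(self, seed_xyz, point_feat, image_feat):
        # seed_xyz: (B, N, 3), point_feat: (B, N, Cp), image_feat: (B, N, Ci)
        fused = torch.cat([point_feat, image_feat], dim=-1)
        offsets = self.vote_mlp(fused)            # (B, N, 3)
        votes = seed_xyz + offsets                # voted centre proposals
        return votes


def constrain_votes_by_2d_boxes(votes, calib, boxes_2d):
    """Keep only votes whose image projection lies inside some 2D box.

    votes:    (N, 3) voted 3D points in camera coordinates
    calib:    (3, 3) camera intrinsic matrix (assumed pinhole model)
    boxes_2d: (M, 4) detected boxes as (x1, y1, x2, y2) in pixels
    """
    uvw = votes @ calib.T                         # project to the image plane
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)

    inside_any = torch.zeros(votes.shape[0], dtype=torch.bool)
    for x1, y1, x2, y2 in boxes_2d:
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        inside_any |= inside
    return votes[inside_any]                      # background votes discarded
```

The intent of the second step mirrors the abstract's constrained fusion module: votes that project outside every 2D detection are treated as likely background and dropped, so sparse scenes contribute fewer distracting points to the proposal stage.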

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419641
DOI: http://dx.doi.org/10.7717/peerj-cs.2260

Publication Analysis

Top Keywords

object detection (16), point clouds (16), two-stage fusion (8), fusion framework (8), object occlusion (8), data point (8), image features (8), object (6), point (6), detection (5)
