Attention mechanisms have driven significant progress across point cloud tasks. Benefiting from their capacity to capture long-range dependencies, research in point cloud completion has achieved promising results. However, point cloud data are typically disordered, exhibit complicated non-Euclidean geometric structure, and behave unpredictably. Most current attention modules rely on Euclidean or local geometry, which fails to accurately represent the intrinsic non-Euclidean characteristics of point cloud data. We therefore propose a novel geodesic attention-based multi-stage refinement transformer network, which aligns the feature dimensions of query, key, and value and captures long-range geometric dependencies on the manifold. A novel Position Feature Extractor is then designed to enhance geometric features and explicitly capture graph-based non-Euclidean properties of point cloud objects. A Recurrent Information Aggregation Unit is further applied to aggregate historical information from previous stages with current geometric features to guide the network at the current stage. The proposed method is strongly competitive with current state-of-the-art methods.
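The abstract does not give the exact formulation of the geodesic attention module, but the general idea — biasing attention scores by distances measured along the surface (graph shortest paths over a k-nearest-neighbor graph) rather than straight-line Euclidean distances — can be sketched as follows. This is a minimal illustrative approximation, not the paper's implementation; the kNN graph construction, the Floyd-Warshall geodesic approximation, and the penalty weight `lam` are all assumptions for demonstration.

```python
import numpy as np

def knn_graph(points, k=4):
    """Build a symmetric k-nearest-neighbor graph; edge weights are
    Euclidean distances, non-edges are infinite (assumed construction)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    adj = np.full((n, n), np.inf)
    np.fill_diagonal(adj, 0.0)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]  # skip self at column 0
    for i in range(n):
        for j in idx[i]:
            adj[i, j] = adj[j, i] = d[i, j]
    return adj

def geodesic_distances(adj):
    """Floyd-Warshall all-pairs shortest paths: graph distance is a
    standard discrete approximation of geodesic distance on a manifold."""
    g = adj.copy()
    for m in range(len(g)):
        g = np.minimum(g, g[:, m:m + 1] + g[m:m + 1, :])
    return g

def geodesic_attention(q, k, v, geo, lam=1.0):
    """Scaled dot-product attention with a geodesic-distance bias:
    points far apart on the manifold attend to each other less."""
    scores = q @ k.T / np.sqrt(q.shape[-1]) - lam * geo
    scores = np.where(np.isinf(scores), -1e9, scores)  # unreachable pairs
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w

# Toy usage: 32 random 3D points with 16-dim features (Q = K = V here).
rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 3))
feat = rng.normal(size=(32, 16))
geo = geodesic_distances(knn_graph(pts, k=4))
out, attn = geodesic_attention(feat, feat, feat, geo)
```

Note the contrast with plain attention: two points that are close in Euclidean space but far along the surface (e.g. opposite sides of a thin structure) receive a large geodesic penalty, which is the non-Euclidean behavior the abstract motivates.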
DOI: http://dx.doi.org/10.1038/s41598-025-86704-6