RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration

  • Jiuming Liu ,
  • Guangming Wang ,
  • Zhe Liu ,
  • Chaokang Jiang ,
  • Hesheng Wang

ICCV 2023


Although point cloud registration has achieved remarkable advances in object-level and indoor scenes, large-scale registration methods are rarely explored. Challenges mainly arise from the huge number of points, complex distributions, and outliers of outdoor LiDAR scans. In addition, most existing registration works adopt a two-stage paradigm: they first find correspondences by extracting discriminative local features and then leverage estimators (e.g., RANSAC) to filter outliers, which makes them highly dependent on well-designed descriptors and post-processing choices. To address these problems, we propose an end-to-end transformer network (RegFormer) for large-scale point cloud alignment without any further post-processing.
Specifically, a projection-aware hierarchical transformer is proposed to capture long-range dependencies and filter outliers by extracting point features globally. Our transformer has linear complexity, which guarantees high efficiency even for large-scale scenes. Furthermore, to effectively reduce mismatches, a bijective association transformer is designed for regressing the initial transformation. Extensive experiments on the KITTI and nuScenes datasets demonstrate that RegFormer achieves competitive performance in terms of both accuracy and efficiency. Code is available at https://github.com/IRMVLab/RegFormer.
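The abstract states that the transformer has linear complexity in the number of points. As an illustration only (this is a generic kernelized linear-attention sketch, not the authors' exact projection-aware formulation), the quadratic softmax attention softmax(QKᵀ)V can be replaced by a positive feature map φ and reassociated as φ(Q)(φ(K)ᵀV), so the cost scales as O(N·d²) instead of O(N²·d):

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """O(N) attention sketch: phi(Q) @ (phi(K)^T @ V) with a positive
    feature map phi, instead of forming the N x N attention matrix."""
    def phi(x):
        # ELU(x) + 1: a common positive feature map in linear attention
        return np.where(x > 0, x + 1.0, np.exp(x))

    q, k = phi(q), phi(k)                     # (N, d) each
    kv = k.T @ v                              # (d, d) summary, size independent of N
    z = q @ k.sum(axis=0, keepdims=True).T    # (N, 1) normalizer
    return (q @ kv) / (z + eps)               # (N, d)

# Hypothetical sizes: one downsampled LiDAR scan with 4096 points, 32-dim features
rng = np.random.default_rng(0)
n, d = 4096, 32
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
out = linear_attention(q, k, v)
print(out.shape)  # (4096, 32)
```

Because the (d, d) summary `kv` is independent of N, memory and compute stay linear in the point count, which is why such schemes remain tractable on full outdoor scans.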