F2M-Reg

F2M-Reg: Unsupervised RGB-D Point Cloud Registration with Frame-to-Model Optimization

Zhinan Yu, Zheng Qin, Yijie Tang, Yongjun Wang, Renjiao Yi, Chenyang Zhu, Kai Xu


Introduction

This work studies the problem of unsupervised RGB-D point cloud registration, which aims at training a robust registration model without ground-truth pose supervision. Existing methods usually leverage unposed RGB-D sequences and adopt a frame-to-frame framework based on differentiable rendering, which enforces photometric and geometric consistency between two frames for supervision. However, this frame-to-frame framework is vulnerable to inconsistencies between frames, e.g., lighting changes, geometric occlusion, and reflective materials, which lead to suboptimal convergence of the registration model.

In this paper, we propose a novel frame-to-model optimization framework named F2M-Reg for unsupervised RGB-D point cloud registration. We leverage a neural implicit field as a global model of the scene and optimize the estimated pose of each frame by registering it to the global model; the registration model is subsequently trained with the optimized poses. Thanks to the global encoding capability of the neural implicit field, our frame-to-model framework is significantly more robust to inconsistencies between frames and thus provides better supervision for the registration model.

Besides, we demonstrate that F2M-Reg can be further enhanced by a simple synthetic warm-up strategy. To this end, we construct a photorealistic synthetic dataset named Sim-RGBD to initialize the registration model before the frame-to-model optimization on real-world RGB-D sequences. Extensive experiments on four challenging benchmarks show that our method surpasses previous state-of-the-art counterparts by a large margin, especially under scenarios with severe lighting changes and low overlap.
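The frame-to-model idea above boils down to estimating each frame's pose by aligning the frame against a persistent global model of the scene, rather than against a single other frame. As a loose, self-contained illustration of the pose-recovery step only (not the paper's neural-implicit-field optimization), the sketch below aligns a toy frame to a toy point-set model with the classical Kabsch algorithm, assuming known point correspondences; all names and values are made up for the example:

```python
import numpy as np

def kabsch(src, dst):
    # Closed-form least-squares rigid fit: find R, t with dst_i ≈ R @ src_i + t.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy setup: a "global model" point set and a frame observed under an unknown pose.
rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
frame = model @ R_true.T + t_true            # frame_i = R_true @ model_i + t_true

# "Frame-to-model" step: recover the pose mapping the frame back onto the model.
R_est, t_est = kabsch(frame, model)
assert np.allclose(R_est, R_true.T, atol=1e-8)
assert np.allclose(t_est, -R_true.T @ t_true, atol=1e-8)
```

In F2M-Reg itself the global model is a neural implicit field and poses are refined by rendering-based optimization, which handles the unknown correspondences and photometric inconsistencies that this closed-form toy example sidesteps.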

If you find this repository useful to your research or work, we would really appreciate a star for this repository ✨ and a citation of our paper 📚.

Feel free to contact me (zn_yu@nudt.edu.cn) or open an issue if you have any questions or suggestions.

📢 News

  • 2025-05-01: The paper is available on arXiv.

📋 TODO

  • Release the training and evaluation code.
  • Release the dataset.
  • Release the model.

🔧 Installation

Coming soon.

📊 Dataset

Coming soon.

👀 Visual Results

ScanNet & 3DMatch

ScanNet++

📚 Citation

If you find our work helpful, please consider citing:

@misc{yu2025f2mregunsupervisedrgbdpoint,
      title={F2M-Reg: Unsupervised RGB-D Point Cloud Registration with Frame-to-Model Optimization}, 
      author={Zhinan Yu and Zheng Qin and Yijie Tang and Yongjun Wang and Renjiao Yi and Chenyang Zhu and Kai Xu},
      year={2025},
      eprint={2405.00507},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.00507}, 
}

About

Official PyTorch implementation of "F2M-Reg: Unsupervised RGB-D Point Cloud Registration with Frame-to-Model Optimization"
