F2M-Reg: Unsupervised RGB-D Point Cloud Registration with Frame-to-Model Optimization
Zhinan Yu, Zheng Qin, Yijie Tang, Yongjun Wang, Renjiao Yi, Chenyang Zhu, Kai Xu
This work studies the problem of unsupervised RGB-D point cloud registration, which aims at training a robust registration model without ground-truth pose supervision. Existing methods usually leverage unposed RGB-D sequences and adopt a frame-to-frame framework based on differentiable rendering, which enforces photometric and geometric consistency between two frames for supervision. However, this frame-to-frame framework is vulnerable to inconsistencies between different frames, e.g., lighting changes, geometric occlusion, and reflective materials, which lead to suboptimal convergence of the registration model. In this paper, we propose a novel frame-to-model optimization framework named F2M-Reg for unsupervised RGB-D point cloud registration. We leverage a neural implicit field as a global model of the scene, optimize the estimated poses of the frames by registering them to the global model, and then train the registration model with the optimized poses. Thanks to the global encoding capability of the neural implicit field, our frame-to-model framework is significantly more robust to inconsistencies between frames and thus provides better supervision for the registration model. Besides, we demonstrate that F2M-Reg can be further enhanced by a simple synthetic warm-up strategy. To this end, we construct a photorealistic synthetic dataset named Sim-RGBD to initialize the registration model before the frame-to-model optimization on real-world RGB-D sequences. Extensive experiments on four challenging benchmarks show that our method surpasses previous state-of-the-art methods by a large margin, especially under severe lighting changes and low overlap.
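To illustrate the frame-to-model idea, here is a toy sketch of the training loop's structure. This is *not* the paper's implementation: the neural implicit field is replaced by an accumulated point set, and registration to the model is reduced to centroid alignment; all function and variable names are hypothetical.

```python
import numpy as np

def frame_to_model_optimize(frames, init_poses):
    """Toy sketch of a frame-to-model loop: each frame is registered
    against a growing global model of the scene (here a point set, as a
    stand-in for the neural implicit field), and the refined poses are
    returned to serve as pseudo-labels for training the registration model.
    """
    global_model = []                         # accumulated world-frame points
    poses = [p.copy() for p in init_poses]    # 4x4 camera-to-world matrices
    for frame, pose in zip(frames, poses):
        # lift the frame's (N, 3) camera-frame points into world coordinates
        pts_world = frame @ pose[:3, :3].T + pose[:3, 3]
        if global_model:
            model = np.concatenate(global_model)
            # register the frame to the *model*, not to the previous frame;
            # centroid alignment is a toy proxy for the real pose optimization
            pose[:3, 3] += model.mean(0) - pts_world.mean(0)
            pts_world = frame @ pose[:3, :3].T + pose[:3, 3]
        global_model.append(pts_world)
    return poses  # optimized poses act as supervision for the registration model
```

The key design point mirrored here is that each frame is aligned against the global model rather than a neighboring frame, so per-frame nuisances (lighting, occlusion) average out in the model instead of corrupting the supervision signal.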

If you find this repository useful for your research or work, please consider starring it ✨ and citing our paper 📚.
Feel free to contact me (zn_yu@nudt.edu.cn) or open an issue if you have any questions or suggestions.
- [x] 2025-05-01: The paper is available on arXiv.
- [ ] Release the training and evaluation code.
- [ ] Release the dataset.
- [ ] Release the model.
Coming soon.
If you find our work helpful, please consider citing:
@misc{yu2025f2mregunsupervisedrgbdpoint,
  title={F2M-Reg: Unsupervised RGB-D Point Cloud Registration with Frame-to-Model Optimization},
  author={Zhinan Yu and Zheng Qin and Yijie Tang and Yongjun Wang and Renjiao Yi and Chenyang Zhu and Kai Xu},
  year={2025},
  eprint={2405.00507},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2405.00507},
}
