SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes

CVPR 2023

Large-scale online photorealistic reconstruction of indoor scenes.

ARC Lab, Tencent PCG
overview_image

Abstract

Online reconstruction and rendering of large-scale indoor scenes is a long-standing challenge. SLAM-based methods can reconstruct 3D scene geometry progressively in real time but cannot render photorealistic results. While NeRF-based methods produce promising novel view synthesis results, their long offline optimization time and lack of geometric constraints pose challenges to efficiently handling online input. Inspired by the complementary advantages of classical 3D reconstruction and NeRF, we investigate marrying explicit geometric representation with NeRF rendering to achieve efficient online reconstruction and high-quality rendering. We introduce SurfelNeRF, a variant of neural radiance field that employs a flexible and scalable neural surfel representation to store geometric attributes and appearance features extracted from input images. We further extend the conventional surfel-based fusion scheme to progressively integrate incoming frames into the reconstructed global neural scene representation. In addition, we propose a highly efficient differentiable rasterization scheme for rendering neural surfel radiance fields, which gives SurfelNeRF a 10× speedup in both training and inference time. Experimental results show that our method achieves state-of-the-art performance of 23.82 PSNR and 29.58 PSNR on ScanNet in the feedforward inference and per-scene optimization settings, respectively.
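
To give a rough picture of the representation described above, the sketch below shows one plausible way to store a neural surfel in Python. This is an illustrative sketch, not the paper's actual data layout; the attribute names and the confidence weight are our assumptions.

# Illustrative sketch (not the authors' code): each neural surfel stores
# geometric attributes plus a learned appearance feature.
from dataclasses import dataclass
import numpy as np

@dataclass
class NeuralSurfel:
    position: np.ndarray   # (3,) surfel center in world space
    normal: np.ndarray     # (3,) unit surface normal
    radius: float          # disk radius of the local surface patch
    feature: np.ndarray    # (F,) appearance feature extracted from input images
    weight: float = 1.0    # accumulated confidence used during fusion (assumed)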

Pipeline

overview_image

Overview of our SurfelNeRF. Given an online stream of input images, we first reconstruct a surfel representation associated with neural features to build a local neural surfel radiance field for each input keyframe. The neural surfel radiance field integration then fuses this local field into the global neural surfel radiance field by updating both surfel positions and features: local neural surfels that have corresponding global surfels are fused with them, and the remaining local surfels without correspondences are added to the global model. Novel views can then be rendered from the updated global surfels via our efficient rasterization-guided renderer, which computes color only at the intersection points of rays and surfels and is therefore faster than volume rendering.
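
To make the integration step concrete, here is a minimal sketch of weighted surfel fusion using the hypothetical NeuralSurfel class above. The brute-force search and the distance/normal correspondence test are simplifications chosen for clarity, and the weighted averaging is the classical surfel-fusion update; the paper's actual association and learned feature fusion may differ.

# Minimal sketch of the fusion step (not the authors' implementation).
import numpy as np

def fuse_local_into_global(global_surfels, local_surfels,
                           dist_thresh=0.05, normal_thresh=0.8):
    """Integrate one keyframe's local neural surfels into the global field."""
    for s in local_surfels:
        # Find a corresponding global surfel (brute force; a real system would
        # use projective association or a spatial index).
        match = None
        for g in global_surfels:
            close   = np.linalg.norm(g.position - s.position) < dist_thresh
            aligned = float(g.normal @ s.normal) > normal_thresh
            if close and aligned:
                match = g
                break
        if match is None:
            # No correspondence: add the local surfel to the global model.
            global_surfels.append(s)
        else:
            # Correspondence found: confidence-weighted update of geometry and features.
            w = match.weight + s.weight
            match.position = (match.weight * match.position + s.weight * s.position) / w
            n = match.weight * match.normal + s.weight * s.normal
            match.normal   = n / np.linalg.norm(n)
            match.feature  = (match.weight * match.feature + s.weight * s.feature) / w
            match.radius   = min(match.radius, s.radius)
            match.weight   = w
    return global_surfels

And a similarly hedged sketch of the rasterization-guided rendering idea: instead of densely sampling points along each ray as in volume rendering, color is accumulated only at the few depth-sorted ray-surfel intersections produced by a rasterizer. The rasterizer itself, the per-surfel opacity model, and decode_fn are placeholders assumed for illustration.

# Sketch of compositing color at ray-surfel intersections (not the authors' renderer).
import numpy as np

def render_ray(intersections, decode_fn):
    """intersections: list of (surfel, opacity) pairs sorted front-to-back along the ray.
    decode_fn: maps a surfel feature (plus view direction, omitted here) to an RGB array."""
    color = np.zeros(3)
    transmittance = 1.0
    for surfel, opacity in intersections:
        rgb = decode_fn(surfel.feature)           # a small MLP in practice
        color += transmittance * opacity * rgb    # front-to-back alpha compositing
        transmittance *= 1.0 - opacity
        if transmittance < 1e-3:                  # early ray termination
            break
    return color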

feat_table

Comparison of representation and features with existing methods.

Results

ScanNet

BibTeX

@inproceedings{gao2023surfelnerf,
  author    = {Gao, Yiming and Cao, Yan-Pei and Shan, Ying},
  title     = {SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes},
  booktitle = {CVPR},
  year      = {2023},
}