
OPONeRF: One-Point-One NeRF for Robust Neural Rendering

Yu Zheng   Yueqi Duan   Kangfu Zheng    Hongru Yan   Jiwen Lu   Jie Zhou  

Tsinghua University

[Technical Report] [Code] [Dataset]

Abstract & Method

In this paper, we propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering. Existing NeRFs rest on the key assumption that the target scene remains unchanged between training and test time. However, small but unpredictable perturbations such as object movements, illumination changes and data contamination are widespread in real-life 3D scenes, and they lead to significantly defective or even failed renderings, even for recent state-of-the-art generalizable methods. To address this, OPONeRF adopts a divide-and-conquer framework that adaptively responds to local scene variations by personalizing appropriate point-wise parameters, instead of fitting a single set of NeRF parameters that stays inactive to unseen test-time changes. Moreover, to explicitly capture local uncertainty, we decompose the point representation into a deterministic mapping and a probabilistic inference. In this way, OPONeRF learns the sharable invariance and models unexpected scene variations between training and testing scenes without supervision. To validate the effectiveness of the proposed method, we construct benchmarks from both realistic and synthetic data with diverse test-time perturbations, including foreground motion, illumination variation and multi-modal noise, which are more challenging than conventional generalization and temporal-reconstruction benchmarks. Experimental results show that OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics in benchmark experiments and cross-scene evaluations. We further demonstrate the efficacy of the proposed method by experimenting on other existing generalization benchmarks and by incorporating the One-Point-One NeRF idea into other advanced baselines.
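
To make the idea concrete, below is a minimal PyTorch-style sketch of point-wise parameter personalization combined with a deterministic/probabilistic decomposition. All names (PointParamGenerator, decode), layer sizes and the Gaussian reparameterization are illustrative assumptions for exposition, not the authors' released OPONeRF implementation.

# Illustrative sketch only: a hyper-network maps per-point features to the
# weights of a tiny per-point decoder ("one point, one NeRF"). The split into
# a deterministic branch and a sampled residual mirrors the decomposition
# described in the abstract; all sizes and names are assumptions.
import torch
import torch.nn as nn

class PointParamGenerator(nn.Module):
    def __init__(self, feat_dim=64, hidden=32, out_dim=4):
        super().__init__()
        self.hidden, self.out_dim = hidden, out_dim
        n_params = hidden * out_dim + out_dim
        # Deterministic mapping: shared, invariant part of the parameters.
        self.det = nn.Linear(feat_dim, n_params)
        # Probabilistic inference: mean and log-variance of a local residual
        # that models unexpected scene variations.
        self.mu = nn.Linear(feat_dim, n_params)
        self.logvar = nn.Linear(feat_dim, n_params)

    def forward(self, point_feat):
        # point_feat: (N, feat_dim) features of N sampled 3D points.
        base = self.det(point_feat)
        mu, logvar = self.mu(point_feat), self.logvar(point_feat)
        eps = torch.randn_like(mu)                       # reparameterization
        params = base + mu + eps * torch.exp(0.5 * logvar)
        w, b = params.split([self.hidden * self.out_dim, self.out_dim], dim=-1)
        return w.view(-1, self.out_dim, self.hidden), b  # per-point weights

def decode(h, w, b):
    # Apply each point's personalized linear decoder to its hidden feature.
    # h: (N, hidden) -> (N, out_dim), e.g. RGB + density.
    return torch.bmm(w, h.unsqueeze(-1)).squeeze(-1) + b

# Toy usage: 1024 points with 64-d features and 32-d decoder inputs.
gen = PointParamGenerator()
w, b = gen(torch.randn(1024, 64))
rgba = decode(torch.randn(1024, 32), w, b)               # (1024, 4)

At test time, re-sampling the residual lets the renderer react to local changes that a single frozen parameter set cannot, while the deterministic branch carries the invariance shared across scenes.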

Results

Citation

If you find this work useful for your research, please cite it with the following BibTeX entry.
@article{zheng2024oponerf,
  title={OPONeRF: One-Point-One NeRF for Robust Neural Rendering},
  author={Zheng, Yu and Duan, Yueqi and Zheng, Kangfu and Yan, Hongru and Lu, Jiwen and Zhou, Jie},
  journal={arXiv preprint arXiv:2409.20043},
  year={2024}
}

