Publication
Conditional-Flow NeRF: Accurate 3D modelling with reliable uncertainty quantification
Conference Article
Conference
European Conference on Computer Vision (ECCV)
Edition
17th
Pages
540-557
Doc link
https://doi.org/10.1007/978-3-031-20062-5_31
Abstract
A critical limitation of current methods based on Neural Radiance Fields (NeRF) is that they are unable to quantify the uncertainty associated with the learned appearance and geometry of the scene. This information is paramount in real applications such as medical diagnosis or autonomous driving where, to reduce potentially catastrophic failures, the confidence in the model outputs must be incorporated into the decision-making process. In this context, we introduce Conditional-Flow NeRF (CF-NeRF), a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches. For this purpose, our method learns a distribution over all possible radiance fields modelling the scene, which is used to quantify the uncertainty associated with the modelled scene. In contrast to previous approaches enforcing strong constraints over the radiance field distribution, CF-NeRF learns it in a flexible and fully data-driven manner by coupling Latent Variable Modelling and Conditional Normalizing Flows. This strategy yields reliable uncertainty estimates while preserving model expressivity. Compared to previous state-of-the-art methods proposed for uncertainty quantification in NeRF, our experiments show that the proposed method achieves significantly lower prediction errors and more reliable uncertainty values for synthetic novel-view and depth-map estimation.
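The abstract's core idea can be illustrated with a toy sketch: a conditional normalizing flow transforms samples from a simple base distribution into samples of a quantity of interest (here a 3-channel "radiance" value), conditioned on a per-point feature; the spread of the transformed samples then serves as an uncertainty estimate. All names, dimensions, and the single affine flow layer below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_affine_flow(z, cond, w_scale, w_shift):
    """One affine flow layer: scale and shift are predicted from the
    conditioning vector (e.g. a per-point scene feature). Illustrative
    stand-in for the learned conditional flow described in the paper."""
    log_scale = cond @ w_scale   # shape (D,)
    shift = cond @ w_shift       # shape (D,)
    return z * np.exp(log_scale) + shift

# Hypothetical setup: D-dim radiance samples, C-dim conditioning feature,
# K Monte-Carlo samples from the learned distribution.
D, C, K = 3, 8, 1000
w_scale = rng.normal(scale=0.1, size=(C, D))  # toy "learned" parameters
w_shift = rng.normal(scale=0.1, size=(C, D))
cond = rng.normal(size=C)                     # feature at one 3D point

z = rng.normal(size=(K, D))                   # base Gaussian samples
samples = conditional_affine_flow(z, cond, w_scale, w_shift)

prediction = samples.mean(axis=0)   # point estimate of the rendered value
uncertainty = samples.var(axis=0)   # per-channel uncertainty estimate
```

In CF-NeRF the flow is a learned, invertible network and the conditioning comes from the scene representation; the principle, however, is the same: sample, push through the conditional flow, and read the prediction and its uncertainty off the resulting sample statistics.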
Categories
optimisation
Author keywords
3D, scene representation, uncertainty quantification
Scientific reference
J. Shen, A. Agudo, F. Moreno-Noguer and A. Ruiz. Conditional-Flow NeRF: Accurate 3D modelling with reliable uncertainty quantification, 17th European Conference on Computer Vision, 2022, Tel Aviv (Israel), in Computer Vision – ECCV 2022, Vol. 13666 of Lecture Notes in Computer Science, pp. 540-557, 2022.