We present a novel differentiable rendering framework for joint geometry, material, and lighting estimation from multi-view images. In contrast to previous methods that assume a simplified environment map or co-located flashlights, we formulate the lighting of a static scene as one neural incident light field (NeILF) and one outgoing neural radiance field (NeRF). The key insight of the proposed method is the union of the incident and outgoing light fields through physically-based rendering and inter-reflections between surfaces, which makes it possible to disentangle the scene geometry, material, and lighting from image observations in a physically-based manner. The proposed incident light and inter-reflection framework can be easily applied to other NeRF systems. We show that our method can not only decompose the outgoing radiance into incident lights and surface materials, but also serve as a surface refinement module that further improves the reconstruction detail of the neural surface. We demonstrate on several datasets that the proposed method achieves state-of-the-art results in terms of geometry reconstruction quality, material estimation accuracy, and novel-view rendering fidelity.
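To make the formulation concrete, the sketch below shows one plausible way the rendering equation could be evaluated under this setup: incident radiance at a surface point is queried from the incident light field when a sampled ray escapes the scene, and from the outgoing radiance field when it hits another surface (inter-reflection). This is a minimal Monte Carlo sketch, not the authors' implementation; the callables `neilf`, `nerf_radiance`, `brdf`, and `trace` are hypothetical placeholders for the corresponding networks and ray caster.

```python
import math
import torch

def sample_hemisphere(n, n_samples):
    """Uniformly sample directions on the hemisphere around unit normal n (3,)."""
    d = torch.randn(n_samples, 3)
    d = d / d.norm(dim=-1, keepdim=True)
    # Reflect samples that fall below the surface into the upper hemisphere.
    sign = torch.sign((d * n).sum(-1, keepdim=True))
    sign[sign == 0] = 1.0
    return d * sign

def render_outgoing(x, n, view_dir, neilf, nerf_radiance, brdf, trace,
                    n_samples=128):
    """Monte Carlo estimate of outgoing radiance at surface point x.

    Hypothetical interfaces (assumptions, not the paper's API):
      neilf(x, d)         -> incident radiance from the far field
      nerf_radiance(y, d) -> outgoing radiance of another surface point y
      brdf(x, wi, wo)     -> BRDF value predicted by the material network
      trace(x, d)         -> (hit_mask, y): ray-scene intersection
    """
    wi = sample_hemisphere(n, n_samples)                       # (n_samples, 3)
    cos_theta = (wi * n).sum(-1, keepdim=True).clamp(min=0.0)

    # Incident light: inter-reflected radiance where the ray hits geometry,
    # the neural incident light field otherwise.
    hit, y = trace(x, wi)
    L_i = torch.where(hit, nerf_radiance(y, -wi), neilf(x, wi))

    # Rendering equation L_o = ∫ f(x, wi, wo) L_i (n·wi) dwi, estimated with
    # uniform hemisphere sampling (pdf = 1 / (2π)).
    f = brdf(x, wi, view_dir)
    return (f * L_i * cos_theta).mean(0) * (2.0 * math.pi)
```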
Commonly used LDR images are typically produced by unknown non-linear tone mapping, gamma correction, and value clipping. Directly supervising the rendered values with LDR ground truth can therefore lead to inaccurate material and lighting estimation.
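As a minimal illustration of the problem, the snippet below applies a simplified (assumed) LDR pipeline, gamma correction followed by clipping, to linear HDR radiance: once highlights are clipped to [0, 1], very different intensities become indistinguishable and gradients through them vanish, which is why supervising in LDR space can bias the recovered lighting and materials.

```python
import numpy as np

def to_ldr(hdr, gamma=2.2):
    """Simplified LDR pipeline: gamma correction, then value clipping.
    Real camera pipelines additionally apply unknown non-linear tone
    mapping, which makes inverting the process ill-posed."""
    return np.clip(hdr ** (1.0 / gamma), 0.0, 1.0)

# Two very different highlight intensities collapse to the same LDR value ...
print(to_ldr(np.array([2.0, 20.0])))   # -> [1. 1.]
# ... whereas linear HDR supervision keeps them distinguishable.
```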
Therefore, we constructed a real-world linear HDR dataset (captured using Apple internal tools) for material estimation and related neural rendering tasks. Comparisons between reconstructions from LDR and HDR images are visualized below.
Acknowledgements: The website template was borrowed from Lior Yariv. Image sliders are based on dics.