Abstract

Recent advances in implicit neural representation have demonstrated the ability to recover detailed geometry and material from multi-view images. However, using simplified lighting models such as environment maps to represent non-distant illumination, or fitting indirect lighting with a network that lacks a solid physical basis, can lead to an undesirable decomposition of lighting and material. To address this, we propose a fully differentiable framework named neural ambient illumination (NeAI) that uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way. Together with integrated lobe encoding for roughness-adaptive specular lobes and a pre-convoluted background for accurate decomposition, the proposed method represents a significant step towards integrating physically based rendering into the NeRF representation. Experiments demonstrate superior novel-view rendering performance compared to previous works, and the capability to re-render objects under arbitrary NeRF-style environments opens up exciting possibilities for bridging the gap between virtual and real-world scenes. The code and data will be released upon publication.

Overview

Problem Statement


Previous approaches that model the environment as an omnidirectional radiance map do not take the spatial structure of the 3D environment into account and are therefore unable to handle ambient occlusion and indirect lighting realistically. To address these problems, we make the following contributions in this paper:
  • Neural Ambient Illumination (NeAI) expresses the radiance field of incoming rays by treating each sample in the 3D environment as a light emitter.
  • Integrated Lobe Encoding (ILE) enables incoming rays within a roughness-adaptive specular lobe to be compactly featurized (a hedged sketch follows this list).
  • A multiscale pre-convoluted background representation assists in decomposing object materials from ambient illumination.
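
To make the idea behind ILE concrete, here is a minimal sketch of a roughness-adaptive encoding. It assumes an attenuation scheme in the spirit of integrated positional encoding, where the specular lobe is approximated by a cone whose footprint grows with roughness and distance, damping high-frequency components; the function name and the exact weighting are illustrative assumptions, not the implementation from the paper.

```python
import numpy as np

def integrated_lobe_encoding(x, d, t, roughness, num_freqs=8):
    """Featurize a sample at position x + t * d on the reflected ray.

    The specular lobe is approximated by a cone whose footprint grows
    with roughness and distance t; higher-frequency sinusoids are
    attenuated as the footprint widens, so rougher surfaces see a
    smoother (pre-blurred) encoding. All names here are illustrative.
    """
    p = x + t * d                      # sample position along the reflected ray
    radius = roughness * t             # cone radius as a proxy for lobe width
    feats = []
    for level in range(num_freqs):
        freq = 2.0 ** level
        # Gaussian-style attenuation: wide lobes damp high frequencies.
        atten = np.exp(-0.5 * (freq * radius) ** 2)
        feats.append(atten * np.sin(freq * p))
        feats.append(atten * np.cos(freq * p))
    return np.concatenate(feats)

# Example: a rough surface yields strongly attenuated high frequencies.
x = np.zeros(3)
d = np.array([0.0, 0.0, 1.0])
enc = integrated_lobe_encoding(x, d, t=2.0, roughness=0.5)
```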

Please refer to our paper for more implementation details.

Illustration of our method

Through our defined ILE and pre-convoluted representation, we can efficiently compute the integral within the specular lobe by tracing a single ray along the reflected direction (a hedged sketch follows below).
We show that the derived environment map is gradually blurred as roughness increases (on the left), and a further application to material roughness editing (on the right).
Our model decomposes roughness and diffuse color from ambient illumination, which enables editing.
We edit the car’s diffuse color without affecting the specular reflection of its glossy paint.
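
The single-ray shortcut can be illustrated with the following hedged pseudocode. `query_prefiltered_radiance` and `query_irradiance` are hypothetical callbacks standing in for queries into the pre-convoluted radiance field and its diffuse counterpart; the actual decomposition in the paper may differ.

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror the unit view direction (surface -> eye) about the normal."""
    return 2.0 * np.dot(normal, view_dir) * normal - view_dir

def shade(x, view_dir, normal, albedo, roughness,
          query_prefiltered_radiance, query_irradiance):
    """One reflected ray replaces the integral over the specular lobe,
    because the environment radiance is pre-convolved at blur levels
    matched to roughness. The two query callbacks are hypothetical."""
    r = reflect(view_dir, normal)
    # Roughness selects how strongly pre-blurred a radiance sample to
    # fetch, analogous to picking a mip level of an environment map.
    specular = query_prefiltered_radiance(x, r, roughness)
    diffuse = albedo * query_irradiance(x, normal)
    return diffuse + specular
```

Under this split, roughness editing only changes the blur level handed to the prefiltered query, and diffuse-color editing only rescales the irradiance term, which is why the two edits shown above do not interfere.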

Results of Plug and Play

Our rendering pipeline produces realistic renderings and detailed specular reflectance when placing the decomposed object into a pre-trained NeRF-like environment.
We show the results of placing our material sphere into the "Garden" scene at the top; another case, placing the car into the "Stump" scene, is shown below:

The specular reflection changes smoothly according to the relative position of the object.
A real-world object can also be moved into a NeRF-style environment. We show the results of novel-view synthesis (left) and "plug and play" (right). Note that in novel-view synthesis, the background of the object is taken directly from the ground-truth image. A minimal sketch of this compositing step is given below.
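
The sketch below illustrates, under stated assumptions, why the reflections track the object's position: the reflected rays originate from the object's posed surface points, so moving the object moves the ray origins through the environment field. `env_radiance` is a hypothetical roughness-aware query of the pre-trained environment NeRF, not an API from the paper.

```python
import numpy as np

def relight_in_nerf_env(surface_pts, reflect_dirs, roughnesses,
                        object_to_world, env_radiance):
    """Gather incident specular radiance for an object placed inside a
    pre-trained NeRF scene. `env_radiance(origin, direction, roughness)`
    stands in for a roughness-aware query of the environment field."""
    R, t = object_to_world                 # object pose: rotation, translation
    colors = []
    for p, d, rough in zip(surface_pts, reflect_dirs, roughnesses):
        p_world = R @ p + t                # ray origin moves with the object,
        d_world = R @ d                    # so reflections track its position
        colors.append(env_radiance(p_world, d_world, rough))
    return np.stack(colors)
```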

Citation

Acknowledgements

The website template was borrowed from Michaël Gharbi.