4) Nerf Editing: Relighting, geometry extraction and scene segmentation

lightfield botanist edited this page Aug 16, 2023 · 24 revisions

Shading/BRDF and light extraction

Over the past year (2020), we've learned how to make the rendering process differentiable and turn it into a deep learning module. This sparks the imagination, because the deep learning motto is: "If it's differentiable, we can learn through it." If we know how to differentiably go from 3D to 2D, we can also use deep learning and backpropagation to go back from 2D to 3D.
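
As a toy sketch of that idea (the shader and all values below are made up for illustration, not from any paper), a differentiable forward renderer lets us recover a scene parameter from an observed pixel by gradient descent:

```python
import numpy as np

# Toy "differentiable renderer": a Lambertian shader lit by a single
# directional light. Everything here is illustrative.
def render(albedo, normal, light_dir, light_intensity=1.0):
    # Rendered pixel value = albedo * max(0, n . l) * intensity
    return albedo * max(0.0, float(np.dot(normal, light_dir))) * light_intensity

# Because render() is differentiable in `albedo`, we can recover the albedo
# that explains an observed pixel by gradient descent (inverse rendering).
normal = np.array([0.0, 0.0, 1.0])
light_dir = np.array([0.0, 0.0, 1.0])
target = 0.6          # observed pixel value
albedo = 0.1          # initial guess

for _ in range(200):
    pred = render(albedo, normal, light_dir)
    # d(loss)/d(albedo) for loss = (pred - target)^2, via the chain rule
    grad = 2.0 * (pred - target) * max(0.0, float(np.dot(normal, light_dir)))
    albedo -= 0.1 * grad

print(round(albedo, 3))  # converges toward the observed value 0.6
```

Frameworks like NeRF do exactly this at scale, with automatic differentiation in place of the hand-written gradient.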

Inverse lighting is a hard, under-constrained problem with ambiguities among shape, reflectance, and lighting. A NeRF bakes in the combined result of shading and lighting, so to recover the light in a scene we must also learn about the scene's shading.

The rendering equation (published in 1986)

Let's go back to the rendering equation, which describes physical light transport for a single camera or the human eye. A point in the scene is imaged by measuring the emitted and reflected light that converges on the sensor plane. Radiance (L) represents the strength of a ray, combining angular and spatial power densities. It indicates how much of the power emitted by the light source and then reflected, transmitted, or absorbed by a surface is captured by a camera facing that surface from a given angle of view.
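
For reference, the rendering equation (Kajiya, 1986) writes the outgoing radiance at a surface point x in direction ω_o as emitted plus reflected light:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

where f_r is the BRDF, L_i the incident radiance, n the surface normal, and Ω the hemisphere around n.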

If we solve inverse lighting, we also automatically learn about inverse shading, i.e. recovering the surface reflectance (described by the BRDF).
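
As a quick numeric sanity check (a sketch assuming a Lambertian BRDF f_r = ρ/π, not code from any paper): under uniform unit incident radiance, the rendering-equation integral over the hemisphere evaluates to exactly the albedo ρ.

```python
import numpy as np

# Monte Carlo check of the rendering equation for a Lambertian surface.
rng = np.random.default_rng(0)
rho = 0.6                      # albedo (illustrative value)
n = 200_000                    # number of hemisphere samples

# Uniformly sample directions on the upper hemisphere (pdf = 1 / (2*pi)).
cos_theta = rng.uniform(0.0, 1.0, n)     # z-component = cos(theta)

f_r = rho / np.pi                        # Lambertian BRDF
L_i = 1.0                                # uniform incident radiance
pdf = 1.0 / (2.0 * np.pi)

# Estimator of  integral of f_r * L_i * cos(theta) over the hemisphere
L_o = np.mean(f_r * L_i * cos_theta / pdf)
print(L_o)  # close to rho = 0.6
```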

Relighting with 4D Incident Light Fields

It is possible to re-light and de-light real objects illuminated by a 4D incident light field, representing the illumination of an environment. By exploiting the richness in angular and spatial variation of the light field, objects can be relit with a high degree of realism.
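
Light transport is linear in the incident illumination, which is what makes relighting from a captured light field work. A toy sketch (the transport matrix and sizes below are invented for illustration):

```python
import numpy as np

# If T[p, l] stores how much light from direction l reaches pixel p (the
# light transport matrix), then the image under any lighting is T @ lighting.
rng = np.random.default_rng(1)
n_pixels, n_lights = 16, 8
T = rng.uniform(0.0, 1.0, (n_pixels, n_lights))   # captured/learned transport

env_a = rng.uniform(0.0, 1.0, n_lights)   # one environment lighting
env_b = rng.uniform(0.0, 1.0, n_lights)   # another environment lighting

img_a = T @ env_a
img_b = T @ env_b

# Linearity: relighting under a blend of the two environments equals the
# same blend of the two relit images.
blend = T @ (0.3 * env_a + 0.7 * env_b)
print(np.allclose(blend, 0.3 * img_a + 0.7 * img_b))  # True
```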

Another dimension in which NeRF-style methods have been augmented is in how to deal with lighting, typically through latent codes that can be used to re-light a scene. NeRF-W was one of the first follow-up works on NeRF, and optimizes a latent appearance code to enable learning a neural scene representation from less controlled multi-view collections.

Neural Reflectance Fields improve on NeRF by adding a local reflection model in addition to density. It yields impressive relighting results, albeit from single point light sources. NeRV uses a second “visibility” MLP to support arbitrary environment lighting and “one-bounce” indirect illumination.

NeRFReN: Neural Radiance Fields with Reflections

https://bennyguo.github.io/nerfren/

Source: https://en.wikipedia.org/wiki/Light_stage

Source: Advances in Neural Rendering, https://www.neuralrender.com/

NeRD: Neural Reflectance Decomposition from Image Collections, 2021

NeRD is a method that can decompose image collections from multiple views taken under varying or fixed illumination conditions. The object can be rotated, or the camera can turn around the object. The result is a neural volume with an explicit representation of the appearance and illumination in the form of the BRDF and Spherical Gaussian (SG) environment illumination.
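
A Spherical Gaussian lobe has the closed form G(ω) = μ · exp(λ(ω·ξ − 1)), and an environment map is approximated as a small sum of such lobes. A sketch with made-up lobe parameters (not values from the paper):

```python
import numpy as np

# One Spherical Gaussian (SG) lobe: amplitude mu, sharpness lambda, axis xi.
def sg_eval(w, axis, sharpness, amplitude):
    return amplitude * np.exp(sharpness * (np.dot(w, axis) - 1.0))

# An environment illumination approximated as a small sum of SG lobes.
lobes = [
    (np.array([0.0, 0.0, 1.0]), 10.0, 1.5),   # bright overhead lobe
    (np.array([1.0, 0.0, 0.0]), 4.0, 0.5),    # dimmer side lobe
]

def env_radiance(w):
    return sum(sg_eval(w, a, s, m) for a, s, m in lobes)

up = np.array([0.0, 0.0, 1.0])
print(round(env_radiance(up), 3))  # dominated by the overhead lobe
```

Because each lobe is a smooth analytic function, the integral against a BRDF has closed-form approximations, which is why SG environments are popular in inverse rendering.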

https://markboss.me/publication/2021-nerd/

Neural PIL, 2022

By the same authors as NeRD: Neural PIL (TensorFlow): https://github.com/cgtuebingen/Neural-PIL.git

NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination, 2021

  • https://people.csail.mit.edu/xiuming/projects/nerfactor/
  • https://github.com/google/nerfactor

PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting, 2021

NVIDIA DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer, 2021

https://nv-tlabs.github.io/DIBRPlus/ Also used for generative code in https://nv-tlabs.github.io/GET3D/

DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting, 2023

http://geometrylearning.com/DE-NeRF/

Global illumination, 2022

Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination, 2022

The main limitation of the other methods above is their simplified shading model, which does not account for global illumination or shadows. Once the BSDF is obtained, a path tracer can be used to synthesize one-light-at-a-time (OLAT) renderings of the scene.

From the paper: "We show results of our method on real scenes of the DTU dataset. We can successfully synthesize high-quality novel views and plausible relighting. This shows that our method is robust to such real-world captures, which are very challenging due to the lack of very precise camera calibration and foreground segmentation, camera noise, and other effects that are typically not present in synthetic datasets."

Efficient and Differentiable Shadow Computation for Inverse Problems, 2022

Other

Comparison of methods for inverse scene lighting

"Ours" refers to Nvdiffrast here.

Full scene decomposition

NVIDIA Nvdiffrast – Modular Primitives for High-Performance Differentiable Rendering, 2022

SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections, 2022

By the same authors as the NeRD paper above.

NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination, 2023

abs: https://buff.ly/41JfMmh project page: https://buff.ly/43NImF0

IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis, 2022

nvdiffrec: Textured meshes from scenes, 2022

https://github.com/NVlabs/nvdiffrecmc See https://github.com/3a1b2c3/seeingSpace/wiki/Hands-on:-Getting-started-and-Nerf-frameworks#nvdiffrec--mesh-and-light-reconstruction-from-images

Editing and Inpainting

Samsung SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields

https://spinnerf3d.github.io/

Segmentation of 3D Scenes

NeSF: Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes

Fine-Grained Entity Segmentation

PaletteNeRF, 2022

LERF: Language Embedded Radiance Fields

  • Grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D

abs: https://buff.ly/42nMomv project page: https://buff.ly/42nMpXB

Reference-guided Controllable Inpainting of Neural Radiance Fields, 2023

abs: https://buff.ly/3KVoUNX project page: https://buff.ly/3UPwjmq

Conversion to geometry

The neural network can also be converted to a mesh in certain circumstances (https://github.com/bmild/nerf/blob/master/extract_mesh.ipynb). We first need to infer which locations are occupied by the object: create a grid volume in the form of a cuboid covering the whole object, then use the NeRF model to predict whether each cell is occupied. This is the main reason why mesh extraction is only available for 360° inward-facing scenes, not for forward-facing scenes.
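
A minimal sketch of that occupancy-grid step (the `density` function below is a dummy stand-in for a trained NeRF's density output):

```python
import numpy as np

# Stand-in for the trained NeRF: here a unit sphere with high density inside.
def density(xyz):
    return np.where(np.linalg.norm(xyz, axis=-1) < 1.0, 50.0, 0.0)

# Build a cuboid grid covering the whole object and query density per cell.
N = 64
axis = np.linspace(-1.5, 1.5, N)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

sigma = density(grid.reshape(-1, 3)).reshape(N, N, N)
occupied = sigma > 10.0          # threshold on density

print(occupied.sum() > 0)
# A triangle mesh can then be extracted from `sigma` with marching cubes,
# e.g. skimage.measure.marching_cubes.
```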

Mesh-based rendering has been around for a long time, and GPUs are optimized for it.

Gap filling with diffusion, 2023

Deceptive-NeRF: Enhancing NeRF Reconstruction using Pseudo-Observations from Diffusion Models https://arxiv.org/format/2305.15171

NeuralEditor: Editing Neural Radiance Fields via Manipulating Point Clouds, 2023

https://immortalco.github.io/NeuralEditor/

DynIBaR Neural Dynamic Image-Based Rendering, 2023

Retiming, stabilization, and stereo with NeRF reconstruction: https://dynibar.github.io/

TSDF (truncated signed distance function) Fusion

TSDF Fusion is a meshing algorithm that fuses depth maps into a voxel grid of truncated signed distances and extracts a surface as a mesh. This method works for all models.
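
A minimal single-view sketch of the TSDF update (an orthographic camera and a synthetic flat depth map, purely for illustration; real pipelines use perspective cameras and fuse many views):

```python
import numpy as np

N = 32
trunc = 0.3                                # truncation distance
tsdf = np.zeros((N, N, N))
weight = np.zeros((N, N, N))

zs = np.linspace(0.0, 1.0, N)              # voxel depth along the view ray
depth_map = np.full((N, N), 0.5)           # observed depth: flat wall at 0.5

for k, z in enumerate(zs):
    sdf = depth_map - z                    # signed distance to the surface
    d = np.clip(sdf / trunc, -1.0, 1.0)    # truncate to [-1, 1]
    w_new = (sdf > -trunc).astype(float)   # skip voxels far behind the surface
    w_old = weight[:, :, k]
    tsdf[:, :, k] = (tsdf[:, :, k] * w_old + d * w_new) / np.maximum(w_old + w_new, 1e-8)
    weight[:, :, k] = w_old + w_new

# The surface is the TSDF zero crossing among observed voxels (depth ~0.5).
obs = np.where(weight[0, 0, :] > 0)[0]
surface_k = obs[np.argmin(np.abs(tsdf[0, 0, obs]))]
print(round(float(zs[surface_k]), 2))
```

A mesh is then pulled out of the fused grid with marching cubes, the same way as for the occupancy grid above.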

Poisson surface reconstruction

Poisson surface reconstruction fits a watertight surface to an oriented point cloud, for example points sampled from the NeRF density with estimated normals.

nerf2mesh, 2023

PyTorch, with friendlier licensing: https://github.com/3a1b2c3/nerf2mesh

Neural Microfacet Fields for Inverse Rendering, 2023

https://half-potato.gitlab.io/posts/nmf/: a method for recovering materials, geometry (volumetric density), and environmental illumination from a collection of images of a scene.

https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once

Relighting Neural Radiance Fields with Shadow and Highlight Hints, SIGGRAPH 2023

https://nrhints.github.io/
