The gradient scaling makes the hash grid encoder train to a sharper result in the first pass, most notably on models with high-frequency textures. It is not a crucial optimization, however: the second pass, where we switch to standard 2D textures, usually increases the sharpness regardless.
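For reference, one common way to implement this kind of gradient scaling in PyTorch is an autograd function that is the identity in the forward pass and multiplies the gradient in the backward pass, applied to the encoder's output so the hash grid parameters see a boosted gradient while the MLP is unaffected. This is a minimal sketch of that general technique, not the repo's exact code; the names `_ScaleGradient` and `scale_gradient` are hypothetical:

```python
import torch


class _ScaleGradient(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Scale the gradient flowing back into the encoder; `scale` itself
        # gets no gradient.
        return grad_out * ctx.scale, None


def scale_gradient(x, scale):
    return _ScaleGradient.apply(x, scale)


# Usage sketch: boost gradients into the hash grid encoder only.
# features = encoder(coords)
# rgb = mlp(scale_gradient(features, 128.0))
```

With `gradient_scaling=1.0` this reduces to a no-op, which matches the observation that disabling it changes little once the second pass sharpens the result anyway.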
We use only the encoder from tcnn for compatibility reasons. The cutlass MLPs were not compatible with some of the older servers we used during development, so with a vanilla PyTorch MLP we could deploy the code on more machines. It is easy to switch back to the tcnn MLP (or, preferably, the combined encoder+MLP from tcnn) if you want.
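The swap described above can be sketched as follows: a plain PyTorch MLP that consumes the tcnn encoder's features, with the fused tcnn alternative shown in comments. The helper name `make_vanilla_mlp` and all dimensions and config values are illustrative assumptions, not the repo's actual settings:

```python
import torch
import torch.nn as nn


def make_vanilla_mlp(in_dim, hidden_dim=64, out_dim=3, n_hidden=2):
    """Plain PyTorch MLP standing in for the tcnn (cutlass) MLP, so the
    model runs on GPUs the cutlass kernels do not support."""
    layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True)]
    for _ in range(n_hidden - 1):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU(inplace=True)]
    layers.append(nn.Linear(hidden_dim, out_dim))
    return nn.Sequential(*layers)


# Switching back to the fused tcnn path (requires CUDA + tiny-cuda-nn;
# config values below are placeholders, not the repo's settings):
#
# import tinycudann as tcnn
# model = tcnn.NetworkWithInputEncoding(
#     n_input_dims=3, n_output_dims=3,
#     encoding_config={"otype": "HashGrid", "n_levels": 16,
#                      "n_features_per_level": 2, "log2_hashmap_size": 19,
#                      "base_resolution": 16, "per_level_scale": 2.0},
#     network_config={"otype": "FullyFusedMLP", "activation": "ReLU",
#                     "output_activation": "None", "n_neurons": 64,
#                     "n_hidden_layers": 2})
```

The combined `NetworkWithInputEncoding` avoids a round trip through PyTorch between encoder and MLP, which is why it is the preferable option when the hardware supports it.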
Hello,
Thanks for releasing the code for this amazing work!
I've been going through the code and found some of the implementation in TextureMLP3D confusing, in particular the gradient scaling. I tried setting
gradient_scaling=1.0
and did not find much difference in the output. Thanks!