Any plans for double backward / second-order gradients? i.e. backward for backward functions. #58
Comments
Hi there, thanks for the kind words! :) Unfortunately, supporting second-order derivatives throughout the entire framework would be quite an undertaking. While nice-to-have, it's not a high priority at the moment. Still, I'm going to leave this issue open as a TODO marker. Cheers!
Hi, first, huge thanks for the nice paper and PyTorch extension. About this issue: is there any chance you can make the hash encoding module twice differentiable, i.e. support calling backward on (d_encoding/d_input_coords)? This should be easier to implement than the full MLP and would still be useful in scenarios like sequentializing the hash encoding with PyTorch MLPs to support SDF gradients. Thanks
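For context, the composition described above (a tcnn hash encoding feeding a plain PyTorch MLP) could look roughly like the sketch below. It assumes the `tinycudann` PyTorch bindings (`tcnn.Encoding`), and the HashGrid configuration values are illustrative, not taken from this thread:

```python
import torch
import tinycudann as tcnn

# Hash-grid encoding from tiny-cuda-nn, followed by a plain PyTorch MLP.
# Only the encoding lives in C++/CUDA; the MLP part of the graph is fully
# tracked by autograd, so only the encoding needs double-backward support.
encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
)

mlp = torch.nn.Sequential(
    torch.nn.Linear(encoding.n_output_dims, 64),
    torch.nn.Softplus(beta=100),
    torch.nn.Linear(64, 1),
).cuda()

def sdf(x):
    # x: (N, 3) query points on the GPU; returns (N, 1) signed distances.
    return mlp(encoding(x).float())
```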
Hi @za-cheng, I already have a working custom implementation of this. I'm cleaning up the code and plan to submit a PR here within this week. :)
Great, thanks @ventusff, I'll keep an eye out for it.
Hi @za-cheng, the PR #69 is submitted 😄! I managed to add partial support for second-order derivatives, only for the grid encodings for now. After compiling this implementation, you can try my toy SDF training script. BR, Ventus
Hi @ventusff, thanks so much for the PR. I tested the script, however I got a CUDA illegal memory access error. Best,
@za-cheng Fixed now. You can pull and compile again :)
@ventusff's PR has since been merged.
@ventusff Thank you so much for your work. Are you still working on the double backward for fully_fused_mlp.cu? I would really like to test it in my thesis project.
Hi,
First of all, thanks for the great repo! I've already built a project based on tcnn and found it extremely helpful.
However, during usage I found that since the backward functions are implemented in C++, they are not tracked by PyTorch, so `autograd.grad(..., create_graph=True)` fails to generate `grad_fn` for the resulting grads (i.e. second-order gradients). This functionality is helpful when the training losses depend on first-order gradients. For example, when training an SDF MLP, typically an eikonal loss is used, which is a loss applied on `dy_dx` (the nablas) of the network. To achieve this, `d(dy_dx)_dparam` is needed.
Ref: https://arxiv.org/abs/2002.10099
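In pure PyTorch the pattern looks like the sketch below (a toy stand-in network is used instead of a tcnn module); this is exactly what fails on `dy_dx` of a tcnn network when its backward is not tracked by autograd:

```python
import torch

# Toy SDF network in plain PyTorch, standing in for a tcnn network.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(beta=100), torch.nn.Linear(64, 1)
)

x = torch.rand(1024, 3, requires_grad=True)
y = net(x)

# dy_dx (the nablas). create_graph=True attaches a grad_fn to the result,
# so that a loss on dy_dx can itself be differentiated w.r.t. the parameters,
# i.e. d(dy_dx)_dparam exists.
dy_dx = torch.autograd.grad(
    y, x, grad_outputs=torch.ones_like(y), create_graph=True
)[0]

# Eikonal loss: the gradient of a signed distance field should have unit norm.
eikonal_loss = ((dy_dx.norm(dim=-1) - 1.0) ** 2).mean()
eikonal_loss.backward()  # needs second-order gradients through the network
```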
Currently I'm writing custom `backward_backward` functions upon tcnn's `grid.h` and `fully_fused_mlp.cu`, but it would be really nice if this could be officially supported. 😄
BR,
Ventus
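As a rough illustration of what such `backward_backward` support means on the PyTorch side, a custom op can be made twice differentiable by wrapping its backward pass in a second `torch.autograd.Function`. The toy op below (`y = sin(w * x)`) is only a sketch of that pattern; tcnn's actual implementation lives in the CUDA sources, not in Python:

```python
import torch

class _SinBackward(torch.autograd.Function):
    """The backward pass wrapped as its own Function ("backward_backward")."""

    @staticmethod
    def forward(ctx, grad_out, x, w):
        ctx.save_for_backward(grad_out, x, w)
        cos = torch.cos(w * x)
        return grad_out * cos * w, grad_out * cos * x  # dL/dx, dL/dw

    @staticmethod
    def backward(ctx, g_dx, g_dw):
        grad_out, x, w = ctx.saved_tensors
        cos, sin = torch.cos(w * x), torch.sin(w * x)
        # Gradients of the backward outputs w.r.t. (grad_out, x, w).
        d_grad_out = g_dx * cos * w + g_dw * cos * x
        d_x = grad_out * (-sin * w * w * g_dx + (cos - sin * w * x) * g_dw)
        d_w = grad_out * ((cos - sin * w * x) * g_dx - sin * x * x * g_dw)
        return d_grad_out, d_x, d_w

class Sin(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w):
        ctx.save_for_backward(x, w)
        return torch.sin(w * x)

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        # Delegating to another Function (instead of raw math or a raw kernel)
        # attaches grad_fn to the returned gradients, which is what makes
        # autograd.grad(..., create_graph=True) work through this op.
        return _SinBackward.apply(grad_out, x, w)
```

`torch.autograd.gradgradcheck` can be used to verify such an implementation numerically.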
🎉🎉🎉 UPDATE: to all people who reach here
For now, partial support for double backward, and only for the grid encodings, is implemented within the tiny-cuda-nn repo.
An example usage script can be found here.
For implementation details, please check the original PR #69.
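Since the linked script is not reproduced here, usage could look roughly like the sketch below, combining a `tcnn.Encoding` grid with a PyTorch MLP as in the composition sketch earlier in this thread (unspecified HashGrid fields are assumed to fall back to tcnn's defaults):

```python
import torch
import tinycudann as tcnn

encoding = tcnn.Encoding(3, {"otype": "HashGrid"})  # remaining fields assumed to default
mlp = torch.nn.Sequential(
    torch.nn.Linear(encoding.n_output_dims, 64),
    torch.nn.Softplus(beta=100),
    torch.nn.Linear(64, 1),
).cuda()

x = torch.rand(4096, 3, device="cuda", requires_grad=True)
y = mlp(encoding(x).float())

# With double backward supported for the grid encoding, the nablas carry a grad_fn.
nablas = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

eikonal_loss = ((nablas.norm(dim=-1) - 1.0) ** 2).mean()
eikonal_loss.backward()  # second-order gradients flow into the MLP and the grid
```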