
Can the proposed deferred back-propagation be accomplished by PyTorch? #11

Closed · chobao opened this issue Oct 8, 2022 · 4 comments


chobao commented Oct 8, 2022

Hi Kai-46, thanks for the impressive work! The proposed deferred back-propagation method can render a full-resolution image in one batch. However, the back-propagation in the released code is implemented in CUDA. Is it possible to accomplish it in PyTorch?

Kai-46 (Owner) commented Oct 10, 2022

Yes, you could reference our ARF-TensoRF implementation, which is purely in Python: Google Drive
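For readers who cannot access the link, the deferred back-propagation idea can be sketched in plain PyTorch. This is a minimal sketch with illustrative names (`render_patch`, `loss_fn` are stand-ins, not the actual ARF-TensoRF API): render all patches once without a graph to get the full image, back-propagate the full-image loss only down to the cached pixels, then re-render each patch with a graph and feed it its slice of the cached pixel gradient.

```python
import torch

def deferred_backprop_step(render_patch, patches, loss_fn, optimizer):
    # Pass 1: render the full image patch-by-patch WITHOUT building a graph,
    # so memory stays bounded by one patch at a time.
    with torch.no_grad():
        rendered = [render_patch(p) for p in patches]
    full_img = torch.cat(rendered, dim=0).requires_grad_(True)

    # Compute the full-resolution loss and cache d(loss)/d(pixels).
    loss = loss_fn(full_img)
    loss.backward()
    pixel_grad = full_img.grad.detach()

    # Pass 2: re-render each patch WITH a graph and back-propagate its
    # cached pixel gradients into the model parameters. Gradients from
    # the patches accumulate, reproducing a full-image backward pass.
    optimizer.zero_grad()
    offset = 0
    for p in patches:
        rgb_pred = render_patch(p)
        rgb_pred.backward(pixel_grad[offset:offset + rgb_pred.shape[0]])
        offset += rgb_pred.shape[0]
    optimizer.step()
    return loss.item()
```

The two-pass structure trades one extra forward pass for never holding the full-image graph in memory, which is what makes full-resolution losses tractable.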

chobao (Author) commented Oct 12, 2022

Wow, thanks for sharing the code! I read it and the answer is clear. However, one thing is confusing: in train_style.py (lines 249-333), the optimization seems to be missing an optimizer.zero_grad() before rgb_pred.backward(rgb_pred_grad). Please let me know if I have misunderstood.

Kai-46 (Owner) commented Oct 12, 2022

That seems to be the case; you could add it to line 294 of train_style.py.
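For anyone hitting the same issue: the reason the call matters is that PyTorch accumulates into `.grad` buffers across `backward()` calls until they are explicitly cleared. A toy illustration (hypothetical tensors, not the actual train_style.py code):

```python
import torch

w = torch.ones(3, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

w.sum().backward()
print(w.grad)        # tensor([1., 1., 1.])

w.sum().backward()   # without zero_grad(), gradients accumulate
print(w.grad)        # tensor([2., 2., 2.])

opt.zero_grad()      # the missing call resets the buffers
w.sum().backward()
print(w.grad)        # tensor([1., 1., 1.])
```

Note that in the deferred scheme the per-patch `backward()` calls are *supposed* to accumulate within one optimization step; `zero_grad()` is only needed once per step, before the patch loop, which is why line 294 is the right spot.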

chobao (Author) commented Oct 13, 2022

Thanks, it helps a lot!
