Do you plan to have a python wrapper for the fully fused MLP? #14
Hi, I am not an expert in CUDA coding, but I have more experience with PyTorch/TensorFlow. Do you have any plans to provide this code with a Python (more specifically, PyTorch) wrapper? Or would it be possible to point out the location of the forward/backward functions of this MLP implementation, so that we can potentially incorporate it into other Python code? Thanks a lot!
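For readers who want to attempt such a binding themselves: a custom forward/backward pair is typically exposed to PyTorch through torch.autograd.Function. Below is a minimal, runnable sketch of that general pattern; the fused_mlp_forward/fused_mlp_backward helpers are hypothetical single-matmul stand-ins, not tiny-cuda-nn's actual kernels.

```python
import torch

# Hypothetical stand-ins for the real CUDA kernels (tiny-cuda-nn's actual
# entry points are named differently). A single matmul keeps the sketch
# runnable while showing where the fused kernels would plug in.
def fused_mlp_forward(params, x):
    return x @ params

def fused_mlp_backward(params, x, grad_y):
    return x.t() @ grad_y, grad_y @ params.t()

class FusedMLPFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, params, x):
        ctx.save_for_backward(params, x)   # stash inputs for the backward pass
        return fused_mlp_forward(params, x)

    @staticmethod
    def backward(ctx, grad_y):
        params, x = ctx.saved_tensors
        grad_params, grad_x = fused_mlp_backward(params, x, grad_y)
        return grad_params, grad_x         # one gradient per forward input

params = torch.randn(16, 8, requires_grad=True)
x = torch.randn(4, 16)
FusedMLPFunction.apply(params, x).sum().backward()  # populates params.grad
```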
Related to NVlabs/instant-ngp#8. Summary: yes, we plan to have a Python wrapper for both the fully fused MLP and the hash grid encoding. Colleagues of ours have internally built a functional (but not quite ready) PyTorch wrapper around tiny-cuda-nn, which we hope to release soonish. Based on experience so far, there will be a slowdown compared to the native C++ API, but it will still be significantly faster than Python-native MLPs.
Sounds great; looking forward to it! Both ideas are really cool, and I particularly like the fully fused version of the MLP. It should definitely be much faster, especially for small hidden sizes! It would also be great if you could point me to the files and lines for both the "fully fused MLP" and the "hash table encoding". I can check the current implementation to understand it better, and maybe take a simple stab at binding it over the weekend.
Gladly! Those would be
I just pushed a first version (call it a "beta") of a PyTorch extension. See this section of the README for installation/usage instructions, and please do report any problems you encounter along the way. :)
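To make the thread self-contained, here is a rough sketch of what using the extension looks like. The configuration values below are illustrative assumptions modeled on the sample configs the README describes; defer to the README for the authoritative instructions.

```python
import torch
import tinycudann as tcnn

# Illustrative configs (assumed values; see the repo's sample JSON configs).
encoding_config = {
    "otype": "HashGrid",          # multiresolution hash grid encoding
    "n_levels": 16,
    "n_features_per_level": 2,
    "log2_hashmap_size": 19,
    "base_resolution": 16,
    "per_level_scale": 2.0,
}
network_config = {
    "otype": "FullyFusedMLP",     # the fully fused MLP discussed above
    "activation": "ReLU",
    "output_activation": "None",
    "n_neurons": 64,
    "n_hidden_layers": 2,
}

# Fused encoding + network in one module (2D coordinates -> RGB).
model = tcnn.NetworkWithInputEncoding(
    n_input_dims=2, n_output_dims=3,
    encoding_config=encoding_config, network_config=network_config,
)

coords = torch.rand(1024, 2, device="cuda")  # normalized pixel coordinates
rgb = model(coords)                          # differentiable like any nn.Module
```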
Hi, thanks for releasing the PyTorch binding! I have tried it a bit: it compiled, but running the "mlp_learning_an_image_pytorch.py" script returned the following error.
Do you have any clue what the problem might be? Thanks. It seems to fail only when we try to write out the image under "with torch.no_grad():".
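For reference, a hypothetical minimal reproduction of the failing pattern (the actual script and error output are not shown here): evaluating the network for inference only, as the script does when writing out the image. The config values are illustrative, not taken from the script.

```python
import torch
import tinycudann as tcnn

# Tiny stand-in model; the script's actual configuration differs.
model = tcnn.Network(
    n_input_dims=2, n_output_dims=3,
    network_config={
        "otype": "FullyFusedMLP", "activation": "ReLU",
        "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 2,
    },
)

coords = torch.rand(1024, 2, device="cuda")

model(coords)              # training-style call: works
with torch.no_grad():
    model(coords)          # inference-only call: this was the failing path
```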
@MultiPath should be fixed now! Can you try again?
It is working now on my side, both with and without torch.no_grad().
Confirmed, it works now. Thanks!