Add input_tensor input type #2951
Conversation
Thanks for the PR. Why not just call .forward() with the tensor you already have on hand, though? Instead of calling operator() you can call forward() and give it the tensor on the device directly.
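(For readers following along, here is a minimal sketch of the two call paths being contrasted. The network definition, tensor dimensions, and sample data are illustrative placeholders, not taken from this PR:)

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Illustrative network; the architecture and sizes are placeholders.
using net_type = loss_multiclass_log<fc<10, relu<fc<32, input<matrix<float>>>>>>;

int main()
{
    net_type net;

    // operator(): samples live on the host; the input layer's to_tensor()
    // copies them into a device tensor on every call.
    matrix<float> img(8, 8);
    img = 0;
    std::vector<matrix<float>> samples(4, img);
    auto predictions = net(samples);

    // forward(): hand the non-loss part of the network a tensor that is
    // already on the device, skipping the host-side conversion entirely.
    resizable_tensor x(4, 1, 8, 8);  // N x K x NR x NC
    x = 0;  // placeholder contents; in practice filled on the GPU
    const tensor& out = net.subnet().forward(x);
    (void)predictions; (void)out;
}
```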
As I understand it, there is a larger issue with training: the trainer only consumes data through the input layer's input type, so without this I would have to develop my own trainer. For inference, the same situation applies. I will be streaming video frames from a camera connected to a Jetson system. Let me know if I am missing something, and thanks!
Ah, didn't realize you wanted to train with it. Yeah this is cool, makes sense :D Can you add a short unit test to check that it works and then I'll merge it?
Awesome. I added a unit test. Let me know what you think.
Co-authored-by: pfeatherstone <pfeatherstone@pf>
Nice, thanks for the PR :)
Of course. Btw, dlib is great!
These changes add a new input type that enables feeding networks with batches of data that already reside in device memory.
For context, I am developing a robot simulator for batch reinforcement learning. A deep Q-network receives inputs generated by an OpenGL pipeline that renders a camera's view of the world to a texture. This texture can be read using CUDA graphics interoperability, so the experiences accumulate in device memory. To avoid a round trip through host memory, I added this input layer.
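(As a rough illustration of the pipeline described above, this is a sketch of reading a rendered GL texture into a linear device buffer with the CUDA runtime's graphics interop API. The helper name and RGBA8 layout are assumptions, and error checking is omitted:)

```cpp
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>

// Hypothetical helper: copies an OpenGL 2D texture into a linear device
// buffer. Assumes a current GL context and an RGBA8 texture.
void texture_to_device(GLuint tex, void* dev_ptr, size_t pitch,
                       size_t width, size_t height)
{
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterImage(&res, tex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);
    cudaGraphicsMapResources(1, &res);

    cudaArray_t arr = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);

    // Device-to-device copy: the frame never touches host memory.
    cudaMemcpy2DFromArray(dev_ptr, pitch, arr, 0, 0,
                          width * 4 /* RGBA8 bytes per row */, height,
                          cudaMemcpyDeviceToDevice);

    cudaGraphicsUnmapResources(1, &res);
    cudaGraphicsUnregisterResource(res);
}
```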
I think the functionality could be useful beyond my application.
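(A hypothetical usage sketch, under the assumption that the new input_tensor layer plugs in like dlib's other input layers, i.e. that the samples handed to the trainer are tensors already resident on the device and that its input type can be stored in a std::vector. The architecture, loss, tensor shapes, and labels are placeholders; the exact semantics are defined by the PR itself:)

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Placeholder architecture; a real DQN head would differ.
using net_type = loss_multiclass_log<fc<4, relu<fc<32, input_tensor>>>>;

int main()
{
    net_type net;
    dnn_trainer<net_type> trainer(net);

    // Experiences accumulated in device memory (e.g. via CUDA-GL interop);
    // the shapes and contents here are placeholders.
    std::vector<resizable_tensor> experiences(2, resizable_tensor(1, 1, 8, 8));
    std::vector<unsigned long> actions = {0, 1};

    // The trainer consumes the device tensors through the new input layer,
    // with no host -> device round trip for the sample data.
    trainer.train_one_step(experiences, actions);
}
```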