
Force libtorch to use CUDA context #31565

Open
nachovall opened this issue Dec 23, 2019 · 1 comment
Labels
oncall: jit — Add this issue/PR to JIT oncall triage queue
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments


nachovall commented Dec 23, 2019

I'm trying to integrate the C++ libtorch library to load a model into my application. My application does a lot of CUDA work before I load the model with libtorch, so a CUDA context has already been created.

For some reason, even though the CUDA context has already been created and the calling thread already has a valid context, when I call

torch::jit::script::Module module = torch::jit::load("test.pt");
module.to(at::kCUDA);

a new context is created by libtorch. The new context is not even pushed onto the stack of contexts; it overwrites the current one. I know this because if I call cuCtxPopCurrent after module.to(at::kCUDA), the current context is null.

This causes a lot of problems because I can no longer interact with the memory already allocated in my context.
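
Here is a minimal reproduction of what I mean (a sketch only; it assumes device 0 and a model file test.pt, and omits error checking):

#include <cuda.h>
#include <torch/script.h>
#include <cstdio>

int main() {
    // The application sets up its own driver-API context first.
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext appCtx;
    cuCtxCreate(&appCtx, 0, dev);

    // Load the model; this is where libtorch touches CUDA.
    torch::jit::script::Module module = torch::jit::load("test.pt");
    module.to(at::kCUDA);

    // Check whether the thread is still bound to the application's context.
    CUcontext current;
    cuCtxGetCurrent(&current);
    std::printf("context unchanged: %s\n", current == appCtx ? "yes" : "no");
    return 0;
}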

#jit #cuda #c++

cc @suo

@albanD added the oncall: jit (Add this issue/PR to JIT oncall triage queue) label on Dec 23, 2019
@suo added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Feb 28, 2020

raimis commented Aug 3, 2020

We are hitting the same context problem (openmm/openmm-torch#13), while trying to integrate PyTorch with OpenMM (https://github.com/openmm/openmm).

Would it be difficult to modify torch::jit::load to reuse an existing context? Or has somebody found a workaround?
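
One mitigation that comes to mind (an untested sketch: it restores the thread's current context after the libtorch calls, but libtorch's own allocations may still live in a different context):

CUcontext appCtx = nullptr;
cuCtxGetCurrent(&appCtx);          // remember the application's context
torch::jit::script::Module module = torch::jit::load("test.pt");
module.to(at::kCUDA);              // may leave the thread bound to another context
cuCtxSetCurrent(appCtx);           // rebind the thread to the application's context

Alternatively, since the CUDA runtime API that libtorch builds on binds to a device's primary context, obtaining the application's context via cuDevicePrimaryCtxRetain instead of cuCtxCreate might let both sides share the same context.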
