Force libtorch to use CUDA context #31565
Labels
oncall: jit
Add this issue/PR to JIT oncall triage queue
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
I’m trying to integrate C++ libtorch to load a model into my application. My application does a lot of CUDA work before I load the model with libtorch, so a CUDA context has already been created.
For some reason, even though the CUDA context has already been created and the calling thread already has a valid current context, when I call
torch::jit::script::Module module = torch::jit::load("test.pt");
module.to(at::kCUDA);
a new context is created by libtorch. The new context is not even pushed onto the context stack; it overwrites the current context. I know this because if I call cuCtxPopCurrent after module.to(at::kCUDA), the current context is null.
This causes a lot of problems because I cannot interact with current allocated memory I have in my context.
#jit #cuda #c++
cc @suo