C++ CUDA Project in Windows - Second call to DNN Module::forward() throws exception #47590
@rtrahms, thanks for your feedback. Could you attach your example (.py and .pt) so that I can reproduce it easily?
Not a Python file but a libtorch C++ file, with supporting Detector class .cpp/.h files, built as an MS Visual Studio 2019 project and linked against the libtorch libraries.
CUDA .pt file for the network attached, along with the labels file.
@rtarquini, thanks for your reply. Could you share the image file?
It is just a standard JPG image loaded with OpenCV. You can use any image file, really. The point is that it shouldn't crash when feeding the image into forward() on the second pass.
@rtrahms, I tried your example and it passed. The image I used is Lenna.
@rtrahms, I double-checked with https://download.pytorch.org/libtorch/cu101/libtorch-win-shared-with-deps-debug-1.7.0%2Bcu101.zip. It passed too.
Closing this issue since it's not a bug. @rtrahms, feel free to reopen it if you have any new findings.
@rtrahms What was the solution? |
🐛 Bug
Environment/Setup
Using libtorch 1.7.0 / CUDA 10.1 / MS Visual Studio 2019
Using a TorchScript model .pt file.
To Reproduce
Confirm a CUDA device is present by checking torch::cuda::is_available() and torch::cuda::device_count().
Load a model using the torch::jit::script::Module class and torch::jit::load(), specifying the TorchScript .pt file to load.
Specify module.to(torch::kCUDA) to run the DNN on CUDA.
Load an image and convert to a tensor:
bool half_ = true; // CUDA FP16
cv::cvtColor(img_input, img_input, cv::COLOR_BGR2RGB); // BGR -> RGB
img_input.convertTo(img_input, CV_32FC3, 1.0f / 255.0f); // normalize to [0, 1]
auto tensor_img = torch::from_blob(img_input.data, { 1, img_input.rows, img_input.cols, img_input.channels() }).to(torch::kCUDA);
tensor_img = tensor_img.permute({ 0, 3, 1, 2 }).contiguous(); // BHWC -> BCHW (Batch, Channel, Height, Width)
if (half_) {
    tensor_img = tensor_img.to(torch::kHalf);
}
std::vector<torch::jit::IValue> inputs;
inputs.emplace_back(tensor_img);
Perform inference:
torch::jit::IValue output = module_.forward(inputs);
The first time this forward() call is made, it works; any subsequent call throws an exception. When using the kCPU version of this code, no exception is thrown.
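Taken together, the steps above can be sketched as one minimal program. This is a sketch, not the reporter's actual code: the file names "model.pt" and "test.jpg" are assumptions, and it requires libtorch (CUDA build) and OpenCV to compile.

```cpp
// Minimal repro sketch: run forward() twice on a CUDA TorchScript module.
// "model.pt" and "test.jpg" are placeholder paths, not from the report.
#include <torch/script.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // 1. Confirm a CUDA device is present.
    if (!torch::cuda::is_available() || torch::cuda::device_count() == 0) {
        std::cerr << "No CUDA device found\n";
        return 1;
    }

    // 2. Load the TorchScript module and move it to the GPU.
    torch::jit::script::Module module = torch::jit::load("model.pt");
    module.to(torch::kCUDA);

    // 3. Load an image and convert it to a BCHW float tensor on the GPU.
    bool half = true; // the report feeds FP16 input to the CUDA model
    cv::Mat img = cv::imread("test.jpg");
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);   // BGR -> RGB
    img.convertTo(img, CV_32FC3, 1.0f / 255.0f); // normalize to [0, 1]
    auto tensor = torch::from_blob(
        img.data, { 1, img.rows, img.cols, img.channels() }).to(torch::kCUDA);
    tensor = tensor.permute({ 0, 3, 1, 2 }).contiguous(); // BHWC -> BCHW
    if (half) {
        tensor = tensor.to(torch::kHalf);
    }
    std::vector<torch::jit::IValue> inputs;
    inputs.emplace_back(tensor);

    // 4. Call forward() twice; per the report, the second call throws.
    for (int i = 0; i < 2; ++i) {
        torch::jit::IValue output = module.forward(inputs);
        std::cout << "forward() pass " << (i + 1) << " completed\n";
    }
    return 0;
}
```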
Expected behavior
Once a module is loaded, repeated calls to Module::forward() should return without exception.
cc @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @smartcat2010 @mszhanyi @gmagogsfm