
PyTorch, CUDA and cuDNN version? #2

Closed
ghost opened this issue Jan 27, 2022 · 2 comments

Comments

@ghost

ghost commented Jan 27, 2022

Nice work here ...

Can I ask which PyTorch, CUDA, and cuDNN versions you're using?

I have faced the following issue and was wondering whether it is due to a compatibility problem.

I am testing with PyTorch 1.10 running on CUDA 10.2 / cuDNN 7. No issue occurs when running in Python.

terminate called after throwing an instance of 'std::runtime_error'
  what():  The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/torchvision/models/resnet/___torch_mangle_25.py", line 6, in forward
  def forward(self: __torch__.torchvision.models.resnet.___torch_mangle_25.ResNet,
    x: Tensor) -> Tensor:
    _0 = torch.cudnn_convolution_relu(x, CONSTANTS.c0, CONSTANTS.c1, [2, 2], [3, 3], [1, 1], 1)
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    x0 = torch.max_pool2d(_0, [3, 3], [2, 2], [1, 1])
    _1 = torch.cudnn_convolution_relu(x0, CONSTANTS.c2, CONSTANTS.c3, [1, 1], [1, 1], [1, 1], 1)

Traceback of TorchScript, original code (most recent call last):

    graph(%input, %weight, %bias, %stride:int[], %padding:int[], %dilation:int[], %groups:int):
        %res = aten::cudnn_convolution_relu(%input, %weight, %bias, %stride, %padding, %dilation, %groups)
               ~~~~ <--- HERE
        return (%res)
RuntimeError: cuDNN filters (a.k.a. weights) must be contiguous in desired memory_format

Thanks.
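The RuntimeError above comes from cuDNN's requirement that the filter (weight) tensor be contiguous in the memory format the fused `cudnn_convolution_relu` kernel expects. A minimal sketch of the underlying layout issue, independent of this repo's traced models (the tensor shapes here are illustrative, not taken from the issue):

```python
import torch

# A weight stored in channels_last layout is *not* contiguous in the
# default (NCHW) format, which is what trips cuDNN's contiguity check:
weight = torch.randn(8, 3, 7, 7).to(memory_format=torch.channels_last)
print(weight.is_contiguous())                                   # False
print(weight.is_contiguous(memory_format=torch.channels_last))  # True

# Forcing the default layout back satisfies the NCHW contiguity check:
fixed = weight.contiguous()
print(fixed.is_contiguous())                                    # True
```

This is why a module traced on one PyTorch/cuDNN build can fail on another: the traced graph bakes in layout assumptions that a different build may not satisfy.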

@viig99
Contributor

viig99 commented Jan 28, 2022

I am using a build from PyTorch master (1.10), CUDA 11.5, and cuDNN 8.3.1.

@viig99
Contributor

viig99 commented Jan 28, 2022

The model_resources folder has the traced modules based on my machine configuration; you can trace one based on yours using python3 optimize_model_for_inference.py. I will add this to the README.md.
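For reference, re-tracing a model for the local build might look like the sketch below. This is an assumption about what optimize_model_for_inference.py does, not its actual contents, and TinyNet is a hypothetical stand-in for the repo's ResNet; tracing on your own machine bakes in kernels chosen for your PyTorch/CUDA/cuDNN versions, which is why traced modules are machine-specific:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the repo traces a torchvision ResNet instead.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = TinyNet().eval()
example = torch.randn(1, 3, 32, 32)

with torch.no_grad():
    # Trace with a representative input, then freeze to inline weights
    # and enable inference-time fusions.
    traced = torch.jit.trace(model, example)
    traced = torch.jit.freeze(traced)

traced.save("tiny_traced.pt")  # analogous to the files in model_resources
```

The saved .pt file can then be loaded from C++ or Python with torch.jit.load on the same machine configuration.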

@ghost ghost closed this as completed Jan 28, 2022