Error building k2 v1.13 with pytorch:22.01 #916
I'm trying to build k2 v1.13 from source (`python3 setup.py install`) inside the pytorch:22.01 container (`nvcr.io/nvidia/pytorch:22.01-py3`), and the build fails with an error while compiling `mutual_information.cu`. For the same setup, the k2 v1.11 build succeeded.

P.S. Sorry if the issue is not related to k2.
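A minimal sketch of the comparison described above, assuming the standard k2 repo URL and k2's usual `v<version>` tag naming (both are assumptions here, not details quoted from the issue):

```bash
# Inside the nvcr.io/nvidia/pytorch:22.01-py3 container:
git clone https://github.com/k2-fsa/k2.git
cd k2

git checkout v1.11
python3 setup.py install   # this build succeeds

git checkout v1.13
python3 setup.py install   # this build fails while compiling mutual_information.cu
```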
I suspect it's a mismatch between the CUDA on your path, which is a system-installed CUDA, and whatever version that build of PyTorch was intended to be used with. I believe when we include PyTorch we also get CUDA headers, including those of cub, and this can cause problems if they are not exactly the same version as the nvcc we are using.
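A quick way to check for such a mismatch (a minimal sketch; `nvcc --version` reads the toolkit on the PATH, and `torch.version.cuda` reports the CUDA version PyTorch was built against):

```bash
# CUDA toolkit picked up from the PATH; this nvcc compiles k2's .cu files
nvcc --version

# CUDA version this PyTorch build was compiled against
python3 -c "import torch; print(torch.__version__, torch.version.cuda)"

# If the two CUDA versions differ, the cub headers bundled with PyTorch
# can conflict with the ones shipped by the system toolkit.
```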
Could you provide some information about the container, e.g. the output of the commands sketched below?

(Edit) More info at https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-01.html
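For example, something like the following would capture the relevant details (a sketch of standard diagnostics, not the commands from the original comment):

```bash
# GPU and driver visible inside the container
nvidia-smi

# CUDA toolkit shipped in the container image
nvcc --version

# PyTorch's own environment report (torch, CUDA, cuDNN versions, etc.)
python3 -m torch.utils.collect_env
```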
I will try to use torch 1.11.0 + CUDA 11.5 to reproduce your issue and try to fix it.
@csukuangfj I just tried the torch 1.11.0 + CUDA 11.5 combination in pytorch:21.12 (the previous container). To reproduce the issue:
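A sketch of the reproduction steps implied by the thread (the container tag is from the comment above; the repo URL and tag name are assumed, as before):

```bash
# Start the previous NGC container
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:21.12-py3

# Inside the container:
git clone https://github.com/k2-fsa/k2.git
cd k2
git checkout v1.13
python3 setup.py install   # reportedly fails here as well
```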