CUDA Error: no kernel image is available for execution on the device (209) /tmp/build-via-sdist-nl8se4dx/flashinfer-0.0.4+cu118torch2.2/include/flashinfer/attention/decode.cuh: line 871 at function cudaFuncSetAttribute(kernel, cudaFuncAttributeMaxDynamicSharedMemorySize, smem_size) #249

Open
lucasjinreal opened this issue May 16, 2024 · 2 comments

Comments

@lucasjinreal

Is cuDNN v9 not supported?

@yzh119 (Collaborator) commented May 16, 2024

This is due to a CUDA version mismatch. What are the CUDA version and the PyTorch CUDA version on your device?

nvidia-smi
python -c "import torch; print(torch.version.cuda)"
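
Error 209 is cudaErrorNoKernelImageForDevice, which usually means the installed wheel contains no kernels compiled for the GPU's compute capability, so it can also help to print the device name and SM version. A minimal check, assuming a CUDA-enabled torch install:

python -c "import torch; print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))"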

@lucasjinreal (Author)

I am using CUDA 11.8:

>>> import torch
>>> torch.__version__
'2.2.0+cu118'

I installed flashinfer with pip install flashinfer -i https://flashinfer.ai/whl/cu118/torch2.2/.
My torch works fine (even though torch now pulls in its own NVIDIA toolkit through PyPI).

Is there any minimal function I can run to test why the CUDA versions mismatch?
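
For reference, a minimal sketch that exercises the decode kernel path named in the error (include/flashinfer/attention/decode.cuh); it assumes flashinfer's single_decode_with_kv_cache entry point with the default NHD layout, so treat the exact call as an assumption rather than a confirmed reproducer:

import torch
import flashinfer  # assumed import name for the installed wheel

# Tiny single-request decode attention: q is [num_qo_heads, head_dim],
# k and v are [kv_len, num_kv_heads, head_dim] in the default NHD layout.
num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 32, 128, 64
q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
o = flashinfer.single_decode_with_kv_cache(q, k, v)  # assumed API; should hit the same decode kernel
print(o.shape)  # expected: torch.Size([32, 128])

If this small call raises the same "no kernel image is available" error, the prebuilt wheel most likely lacks kernels for this GPU architecture, and a wheel built for that compute capability (or a source build) would be needed.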
