
load_inline should always recompile a kernel if it failed #119206

Open
msaroufim opened this issue Feb 5, 2024 · 0 comments
Labels
module: cpp-extensions (Related to torch.utils.cpp_extension)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@msaroufim (Member) commented Feb 5, 2024

🐛 Describe the bug

A real-world repro is here: https://github.com/cuda-mode/lectures/blob/main/lecture3/pmpp.ipynb

If you try to load a kernel without ninja installed, the load fails. But if you then `pip install ninja` and try to load the kernel again, it still fails, because the failed build is not invalidated. As a workaround, I've seen users add a `;` to the source code to force a recompilation (see the sketch below), but that doesn't seem great.

Versions

A standard Google Colab instance.

cc @malfet @zou3519

@msaroufim msaroufim added the module: cpp-extensions (Related to torch.utils.cpp_extension) label on Feb 5, 2024
@zou3519 zou3519 added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Feb 8, 2024