Bug when dealing with fallbacks on CPU #105853
Comments
I was not able to repro the FallbackKernel out type bug with your code, but I was able to find the following (potentially related) bug:

```python
x = torch.tensor([-8.4784-1.7658j])
y = torch.tensor([-8.4784-1.7658j])
ans = torch.compile(torch.matmul)(x, y)
out = torch.empty_like(ans)
torch.compile(torch.matmul)(x, y, out=out)
torch.testing.assert_close(ans, out)  # fails
```

Succeeds:

```python
out = torch.compile(torch.matmul)(x, y)
torch.testing.assert_close(ans, out)  # success
```

Note to self: try
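As a sanity check on the expected value: for two 1-D tensors, `torch.matmul` computes an unconjugated dot product, so the reference result can be computed in eager mode without `torch.compile` at all (a minimal sketch):

```python
import torch

x = torch.tensor([-8.4784 - 1.7658j])
y = torch.tensor([-8.4784 - 1.7658j])

# matmul of two 1-D tensors is sum_i x[i] * y[i]
# (no complex conjugation, unlike torch.vdot).
reference = (x * y).sum()

assert torch.allclose(torch.matmul(x, y), reference)
print(reference)
```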
@yf225 is this related to things you were looking at?
Looks like it is not a CPU-specific issue.
Tested with the latest nightly build (2.2.0.dev20231209); neither of the issues reported in this thread reproduces. The snippet finishes without failure.
🐛 Describe the bug
To repro, patch in #105850, change the line

pytorch/test/test_linalg.py, line 4313 (at 3045e84)

to `torch.compile(torch.matmul)(x, y)`, and run. It fails with the following traceback:
It fails when creating a fallback for:

This is odd, as `sym_size` does have a lowering.

Versions
master
cc @ezyang @anjali411 @dylanbespalko @mruberry @lezcano @nikitaved @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @Xia-Weiwen @ngimel