[FakeTensor] Workaround FFT ops with incorrect meta strides #106319
Conversation
This registers a custom fake tensor implementation for the core FFT ops which takes the strides from eager where possible, or otherwise uses unbacked symints to represent the strides.
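For context, the underlying mismatch can be observed directly by comparing the strides the eager FFT kernel produces with what the meta kernel claims. A minimal sketch, assuming a build from around the time of this PR where the meta kernel is registered (newer builds may already report matching strides):

```python
import torch

x = torch.randn(4, 5, 6)

# Strides computed by the real (eager) kernel.
eager_out = torch.fft.rfft(x, dim=1)

# Strides claimed by the meta kernel for the same call.
meta_out = torch.fft.rfft(x.to("meta"), dim=1)

print(eager_out.shape, eager_out.stride())
print(meta_out.shape, meta_out.stride())
```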
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106319
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 2bde2b4.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Are you planning to also fix the meta?
I thought these were fixed in #103616. Or at least, the fft ops now pass the OpInfo stride checks. cc @nkaretnikov
Is this still necessary?
@eellison that PR only touches complex-to-complex transforms, with real-to-complex and complex-to-real still being incorrect. That includes calling …

Regarding #103616, this PR doesn't apply to …

Another option, which would trade off some runtime performance, would be to have inductor use the out variant for these ops. That does cost an extra copy, but at least it would never fail the stride check.
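As a rough illustration of the out-variant idea (not what this PR implements): writing into a preallocated buffer pins the output layout, so the stride check can't fail, at the cost of the extra copy mentioned above. A minimal sketch:

```python
import torch

x = torch.randn(8, 16)

# rfft along a dim of length n produces n // 2 + 1 frequency bins.
out = torch.empty(8, 9, dtype=torch.complex64)

# The out= variant copies the result into our buffer, so `out` keeps the
# contiguous strides we allocated it with, regardless of the kernel's
# natural output layout.
torch.fft.rfft(x, out=out)
print(out.stride())
```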
We do have existing precedent for device-specific logic in meta kernels, e.g. here. It doesn't need to be in fake tensor. Ultimately we should fix and add the registrations in meta_registrations. I know it would be a pain to replicate, but perhaps a little bit easier with the aid of ChatGPT. Filed #106622 for not having sufficient OpInfos to exercise the four-dim case. cc @nkaretnikov
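For anyone unfamiliar: a meta registration computes output metadata (shape, dtype, strides) without running the kernel. A minimal sketch using a hypothetical `demo::my_fft` custom op, just to show the shape of such a registration (the real fix would go through the aten FFT ops in meta_registrations instead):

```python
import torch
from torch.library import Library

lib = Library("demo", "DEF")
lib.define("my_fft(Tensor self) -> Tensor")

def my_fft_cpu(self):
    return torch.fft.fft(self)

def my_fft_meta(self):
    # The meta kernel must report the same shape, dtype, and strides as the
    # eager kernel. fft of a contiguous float32 input yields a contiguous
    # complex64 tensor, so empty() suffices here; empty_strided() would be
    # needed to spell out any other layout.
    return torch.empty(self.shape, dtype=torch.complex64, device="meta")

lib.impl("my_fft", my_fft_cpu, "CPU")
lib.impl("my_fft", my_fft_meta, "Meta")
```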
Also, I would have expected this PR to fix https://github.com/pytorch/pytorch/blob/main/test/inductor/test_torchinductor_opinfo.py#L240-L257...
Hrm, you're right: those tests pass locally whether I put the xfail there or not. Looks like xpass is broken.
This allows symbolic shapes to be traced through `torch.stft` and `torch.istft`. Pull Request resolved: #106400 Approved by: https://github.com/lezcano ghstack dependencies: #106319
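A minimal sketch of what this enables, assuming a build that includes this stack (the shapes and flags here are purely illustrative):

```python
import torch

# With symbolic shapes traced through stft, the function can be compiled
# with dynamic=True instead of specializing on the signal length.
@torch.compile(dynamic=True)
def spectrogram(x):
    return torch.stft(x, n_fft=64, return_complex=True)

print(spectrogram(torch.randn(1, 1024)).shape)
print(spectrogram(torch.randn(1, 2048)).shape)  # new length, same graph
```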
This test raises an error inside the test when an xfailed test succeeds, but is also decorated with the xfail decorator, which converts the error to an xfail. Instead, this lets the test function pass normally and lets the xfail decorator raise "Unexpected success". I also updated the COLLECT_EXPECT code and ran it to get the updated set of failures. Pull Request resolved: #106631 Approved by: https://github.com/lezcano ghstack dependencies: #106319, #106400
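For reference, this is the standard unittest mechanism the fix defers to, shown with a trivial stand-in test (not the actual inductor test):

```python
import unittest

class Demo(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # If the body raises, the runner records an expected failure.
        # If it passes, the decorator itself reports "unexpected success",
        # which is exactly the signal the test-suite fix restores.
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
```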
Stack from ghstack (oldest at bottom):
Currently there are FFT operators which raise `UnsupportedOperatorException` because their meta implementations sometimes give incorrect strides. This works around the problem for static shapes by falling back to eager, though we still don't support calls with dynamic shapes.
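To see the workaround in action, one can propagate an FFT op under fake tensors directly. A minimal sketch using `FakeTensorMode` (an internal API, so subject to change); on builds with this PR the call should succeed with eager-matching strides instead of raising:

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    x = torch.randn(4, 8)        # a fake tensor carrying only metadata
    y = torch.fft.rfft(x)        # previously raised UnsupportedOperatorException
    print(y.shape, y.stride())   # strides recovered via the eager fallback
```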