[dynamo] np.sort(complex) is not implemented #107710
Conversation
This issue was discovered once we were able to trace without breaking in #107689. Same for the next one. [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/107710
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 900b37c with merge base ba5eeed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Funnily, related:
Optional: could also add the same check to searchsorted.
I suppose we should either always return NotImplemented or always raise NotImplementedError? (Here and in normalize_not_implemented.) As of this PR, using the order kwarg raises, but sorting a complex array returns NotImplemented.
```
    # `order` keyword arg is only relevant for structured dtypes; so not supported here.
    a, axis, stable = _sort_helper(a, axis, kind, order)
    result = torch.sort(a, dim=axis, stable=stable)
    return result.values


def argsort(a: ArrayLike, axis=-1, kind=None, order: NotImplementedType = None):
    if a.dtype.is_complex:
        return NotImplemented
```
Nit: Can move to _sort_helper to remove duplication.
But then it would need to somehow return from the main function, so I didn't really know how to do it uniformly. I figured we could always just raise a NotImplementedError and then catch it in the normaliser and return from there...
Given that, at the moment, we always raise, we should probably go with the normaliser approach.
The point of doing the return rather than raising the exception is that dynamo already knows how to treat a function that returns NotImplemented, falling back gracefully to eager.
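A rough sketch of the pattern being discussed here, with hypothetical names (`normalizer` and the helper body below are stand-ins, not the real torch._numpy internals): helpers raise NotImplementedError, and a wrapper at the function boundary converts that into a plain `return NotImplemented`, which is the signal dynamo can fall back on.

```
import functools

def normalizer(func):
    # Hypothetical wrapper: turn "unsupported" exceptions raised inside helpers
    # into a NotImplemented return value at the function boundary, so that
    # dynamo can fall back to eager instead of erroring mid-trace.
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except NotImplementedError:
            return NotImplemented
    return wrapped

def _sort_helper(a, axis, kind, order):
    # Raising here avoids duplicating the checks across sort/argsort/searchsorted,
    # at the cost of requiring the wrapper above on every public entry point.
    if order is not None:
        raise NotImplementedError("'order' is only meaningful for structured dtypes")
    if a.dtype.is_complex:
        raise NotImplementedError("sorting complex arrays is not supported")
    return a, axis, kind == "stable"
```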
Good point. Added searchsorted.
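The added guard presumably mirrors the one on sort/argsort above; a minimal sketch of what a complex-dtype check on a torch-backed searchsorted wrapper could look like (the signature and everything after the guard are assumptions based on the NumPy API, not copied from the actual diff; `a` and `v` are assumed to already be tensors):

```
import torch

def searchsorted(a, v, side="left", sorter=None):
    # Complex values have no total ordering, so signal "unsupported" the same
    # way sort/argsort do and let dynamo fall back to eager.
    if a.dtype.is_complex:
        return NotImplemented
    if sorter is not None:
        # Apply the sorting permutation so the sequence is ascending.
        a = a.take(sorter)
    return torch.searchsorted(a, v, right=(side == "right"))
```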
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: New commits were pushed while merging. Please rerun the merge command. (Details for Dev Infra team: raised by workflow job.)
Gather does the same thing, but it's much better supported in the `torch.compile` stack.
Pull Request resolved: #107711
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710
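The commit message above doesn't include the code, so as a generic, made-up illustration of why `torch.gather` can stand in for flat-index selection while staying friendly to the `torch.compile` stack (the tensors and indices here are invented for the example, not taken from the PR):

```
import torch

a = torch.tensor([[10., 20., 30.],
                  [40., 50., 60.]])
idx = torch.tensor([[2, 0],
                    [1, 1]])

# gather selects a[i][idx[i][j]] along dim=1 for every row i.
print(torch.gather(a, dim=1, index=idx))   # tensor([[30., 10.], [50., 50.]])

# The same selection expressed as flat indexing with torch.take.
flat_idx = idx + torch.arange(a.shape[0]).unsqueeze(1) * a.shape[1]
print(torch.take(a, flat_idx))             # tensor([[30., 10.], [50., 50.]])
```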
Follow-up to #107710: Help dynamo fall back to eager when compiling unimplemented numpy constructs:
- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")` etc)
- numpy functions which torch._numpy does not implement

To test, run (we do not implement arrays of strings)

```
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])
```

and observe that it compiles with fullgraph=False and fails with fullgraph=True.

Fixes #107970
Pull Request resolved: #107863
Approved by: https://github.com/ezyang, https://github.com/lezcano
Stack from ghstack (oldest at bottom):
This issue was discovered once we were able to trace without breaking in #107689. Same for the next one.
cc @mruberry @rgommers @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
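For a concrete picture of the behaviour this stack is after, here is a minimal repro sketch in the style of the follow-up PR's test (the input values are invented for illustration): np.sort on a complex array has no torch._numpy implementation, so dynamo should fall back to eager rather than error inside the trace.

```
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fn():
    return np.sort(np.array([2 + 1j, 1 + 3j]))

# Expected: runs with fullgraph=False (dynamo falls back to eager for the
# unsupported sort); with fullgraph=True it should fail instead.
print(fn())
```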