Add docs for torch.compile(numpy) #109710
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/109710. Note: links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit c528d51 with merge base 2c1554a. This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: e0754eda44cb383bf76319726078ac6e0d2e8aa4 Pull Request resolved: #109710
LGTM!
Several nits from the hairsplitter gallery, feel free to ignore.
ghstack-source-id: 879b21b65f541060e973c11fef20dfe7cab0dc43 Pull Request resolved: #109710
docs/source/torch.compiler_faq.rst
Outdated
- Esoteric ufunc machinery like ``axes=[(n,k),(k,m)->(n,m)]`` and ufunc methods (e.g., ``np.add.reduce``).
- Fortran ordered arrays and, in general, any ``order=`` different to ``C``.
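For readers less familiar with this corner of NumPy, the "ufunc methods" the quoted line refers to can be sketched in plain NumPy (no ``torch.compile`` involved; these are the kinds of calls that graph break):

```python
import numpy as np

# Every NumPy ufunc exposes methods such as .reduce, .accumulate and
# .outer; these are the "ufunc methods" that are not traced.
x = np.arange(5)

total = np.add.reduce(x)        # equivalent to np.sum(x)
running = np.add.accumulate(x)  # equivalent to np.cumsum(x)
table = np.multiply.outer([1, 2], [3, 4])

print(total)           # 10
print(running.tolist())  # [0, 1, 3, 6, 10]
print(table.tolist())    # [[3, 4], [6, 8]]
```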
(Sorry, did not catch it on the first read.) Strictly speaking, some functions have defaults different from 'C' ('A' or 'K'). This does not matter in practice, and we graph break / raise ``NotImplementedError`` for non-default ``order`` arguments. Maybe just say "any non-default values of the ``order=`` argument"?
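To illustrate the point about non-``'C'`` defaults, a plain-NumPy sketch: ``np.copy`` is one example of a function whose default is ``order='K'`` rather than ``'C'``.

```python
import numpy as np

# np.copy defaults to order='K' (preserve layout), not order='C', so
# copying a Fortran-ordered array yields another Fortran-ordered array.
a = np.asfortranarray(np.arange(6).reshape(2, 3))
b = np.copy(a)             # default order='K': stays Fortran-ordered
c = np.copy(a, order='C')  # explicit C order

print(b.flags['F_CONTIGUOUS'])  # True
print(c.flags['C_CONTIGUOUS'])  # True
```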
I'm just going to remove this line, as we also graph break on a few other inputs and it'd be a pain to list them all.
ghstack-source-id: 619ce0282316e24bcf26a0664315652c39958dcd Pull Request resolved: #109710
docs/source/torch.compiler_faq.rst
Outdated
Which NumPy features does ``torch.compile`` support?
----------------------------------------------------

NumPy within ``torch.compile`` follows the latest NumPy release.
Do we have some way of getting the version of NumPy that ``torch.compile`` works with / is tested against?
Nope. This is more of something we would like to have, and a rule for people to submit issues / for us to answer issues.
Cool, can you file an issue to follow up on this?
I think there is no issue to follow up on here. I think we should say that we support the last NumPy release, or I can change it to "the same NumPy release that PyTorch supports". I don't think we have any way to get this, but I remember it's written somewhere. @albanD knows where IIRC. I can change the wording and put a link to the relevant place where we track this.
What I mean is the following:

1. Clearly we are encoding some NumPy behavior based on some version of NumPy, and as written it seems like this can differ from the installed version of NumPy.
2. "Latest" as a descriptor isn't super helpful for the user: they could be using an old version of PyTorch, or NumPy could have just updated between PyTorch releases.
3. If we know what 1. is, we should just tell the user (in code!) so they can check and do what they will with that information, such as installing a NumPy version to match or upgrading PyTorch.
For iii, do we currently have a way to retrieve the NumPy version we support when using it in eager or when running the tests?
@lezcano isn't the north star story here that the PyTorch runtime should match the behavior of the currently installed NumPy version (with obviously only a subset of versions being properly supported, and some leading to nice unsupported errors)?
As of today we're basically targeting a given version of NumPy 2.X, right? And I would agree that we should make sure that we have CI running with it.
Yes, at the moment we are implementing the last version of NumPy (pretty much, modulo that NEP 50 point). Then I guess we'll be able to add support for NumPy 2.0 (while keeping support for 1.X as well) once we support it in core.
That sounds good. I would just not say "the last version of NumPy" but "the NumPy 2.0 pre-release", so that it doesn't become stale if we don't update it.
That makes complete sense. Updating
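Since there is no API exposing which NumPy version ``torch.compile`` targets (the gap discussed above), the closest a user can get today is inspecting the installed NumPy and comparing it against release notes. A sketch; the ``(1, 21)`` threshold below is a hypothetical minimum, not a documented one:

```python
import numpy as np

# There is no programmatic way to query the NumPy version torch.compile
# targets; at minimum, users can parse the installed version and compare
# it against whatever range their PyTorch release documents.
installed = np.__version__
major, minor = (int(part) for part in installed.split('.')[:2])
print(installed)
print((major, minor) >= (1, 21))  # hypothetical minimum-version check
```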
array. If this is not possible, one may often recover the original behavior
by casting the NumPy scalar to the relevant Python scalar type ``bool/int/float``.

- Negative strides: ``np.flip`` and slicing with a negative step return a copy.
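The casting workaround in the first quoted bullet can be sketched in plain Python/NumPy:

```python
import numpy as np

# Where compiled code hands back a NumPy scalar (or 0-d array) in place
# of a Python scalar, casting to the corresponding Python scalar type
# recovers the usual bool/int/float semantics.
f = float(np.float64(2.5))
i = int(np.int64(7))
b = bool(np.bool_(True))

print(type(f).__name__, type(i).__name__, type(b).__name__)  # float int bool
```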
not a question for this documentation, but why don't we fall back in this case?
We could, but we found that the pattern ``x[::-1].fn()``, where ``fn`` is an out-of-place op, is relatively common (it's in torchbench even), so I think it's worth this small dissonance. If people don't agree, we can always reconsider and fall back, or even consider implementing this ourselves, as inductor does support negative strides.
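The ``x[::-1].fn()`` pattern mentioned above, and the view-vs-copy difference, in plain NumPy (under ``torch.compile`` the flipped result would be a copy instead of a view):

```python
import numpy as np

# In eager NumPy, reversing via slicing returns a view with a negative
# stride, not a copy ...
x = np.arange(5)
v = x[::-1]
print(v.strides[0] < 0)  # True: negative stride, so a view
print(v.base is x)       # True in eager NumPy

# ... but for an out-of-place op the view/copy distinction is
# unobservable, which is why compiling this pattern as a copy is only a
# small dissonance.
y = x[::-1] + 1
print(y.tolist())  # [5, 4, 3, 2, 1]
```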
The best place to start is the
`tutorial with general advice for how to debug these sorts of torch.compile issues <https://pytorch.org/docs/main/torch.compiler_faq.html#why-am-i-not-seeing-speedups>`__.

Some graph breaks may happen because of the use of unsupported features. See
nit: both of these features exist in PT as well, so I don't see why this advice should live in the NumPy-specific portion of the documentation.
The intuition behind this (and something that I tried to follow both in the post and in the docs) is that the reader may be someone who comes from NumPy and does not necessarily have much PyTorch experience. As such, I tried to crossref quite a bit and avoid assuming too much PyTorch knowledge.
ghstack-source-id: b56d512939aeb0cac4c69be0f5be7ab22c77f4b9 Pull Request resolved: #109710
array. If this is not possible, one may often recover the original behavior
by casting the NumPy scalar to the relevant Python scalar type ``bool/int/float``.

- Negative strides: ``np.flip`` and slicing with a negative step return a copy.
Consider inverting:

  Old: - Negative strides: ``np.flip`` and slicing with a negative step return a copy.
  New: - Negative strides: views that result in negative strides will instead copy, e.g. ``np.flip`` or slicing with a negative step.
@peterbell10 addressed the review
ghstack-source-id: a332f4d1a6df161589397a022d4363c3de4ab0b5 Pull Request resolved: #109710
docs/source/torch.compiler_faq.rst
Outdated
If the program does work when importing ``torch._numpy as np``, chances are
that the bug is in TorchDynamo. If this is the case, please feel free to open an issue
with a minimal reproducer.
That sounds worth checking. If it doesn't work, then open an issue?
ghstack-source-id: 2e11fc9851d479802c057e3cab8ec31c17d33b9a Pull Request resolved: #109710
ghstack-source-id: 5cdc70a3e5017b77c59d07e29353a99b81d31dd2 Pull Request resolved: #109710
@pytorchbot merge
Merge failed. Reason: This PR needs a label. To add a label, you can comment to pytorchbot. Details for Dev Infra team: raised by workflow job.
ghstack-source-id: 3e29b38d0bc574ab5f35eee34ebb37fa6238de7e Pull Request resolved: #109710
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
cc @mruberry @rgommers @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng