
Add docs for torch.compile(numpy) #109710

Closed · wants to merge 8 commits

Conversation

@lezcano (Collaborator) commented Sep 20, 2023

@pytorch-bot bot commented Sep 20, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/109710

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c528d51 with merge base 2c1554a:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: e0754eda44cb383bf76319726078ac6e0d2e8aa4
Pull Request resolved: #109710
@lezcano lezcano assigned ev-br and msaroufim and unassigned ev-br and msaroufim Sep 20, 2023
@lezcano lezcano added the triaged and topic: docs labels Sep 20, 2023
@ev-br (Collaborator) left a comment

LGTM!

Several nits from the hairsplitter gallery, feel free to ignore.

docs/source/torch.compiler_faq.rst: 10 resolved review threads (outdated)
lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: 879b21b65f541060e973c11fef20dfe7cab0dc43
Pull Request resolved: #109710

- Esoteric ufunc machinery like ``axes=[(n,k),(k,m)->(n,m)]`` and ufunc methods (e.g., ``np.add.reduce``).

- Fortran-ordered arrays and, in general, any ``order=`` different from ``C``.
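For reference, a minimal plain-NumPy sketch of the two constructs these bullets describe (run eagerly here; under ``torch.compile`` they would graph break rather than be traced):

```python
import numpy as np

# Ufunc method: np.add.reduce is the ufunc spelling of a sum along an axis.
a = np.arange(6).reshape(2, 3)
row_sums = np.add.reduce(a, axis=1)  # same result as a.sum(axis=1)

# Fortran-ordered array: column-major memory layout via order="F".
f = np.zeros((2, 3), order="F")
```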
Collaborator:

(sorry, did not catch it on the first read).
Strictly speaking, some functions have defaults different from 'C' ('A' or 'K'). This does not matter in practice, and we graph break / raise NotImplementedError for non-default order arguments.

Maybe just say "any non-default values of the `order=` arguments"?
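For instance (a plain-NumPy sketch of the point above): ``np.copy`` is one of the functions whose default is ``order="K"`` rather than ``"C"``, so it keeps the input's layout unless a non-default order is requested:

```python
import numpy as np

# np.copy defaults to order="K" ("keep"), not "C":
f = np.asfortranarray(np.ones((2, 3)))
kept = np.copy(f)           # default order="K" preserves the Fortran layout
c = np.copy(f, order="C")   # explicit, non-default C (row-major) order
```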

Collaborator Author (lezcano):

I'm just going to remove this line, as we also graph break on a few other inputs and it'd be a pain to list them all.

lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: 619ce0282316e24bcf26a0664315652c39958dcd
Pull Request resolved: #109710
Which NumPy features does ``torch.compile`` support?
----------------------------------------------------

NumPy within ``torch.compile`` follows the latest NumPy release.
Contributor:

Do we have some way of getting the version of NumPy that torch.compile works with / is tested against?

Collaborator Author (lezcano):

Nope. This is more of something we would like to have, and a rule for people to submit issues / for us to answer issues.

Contributor:

Cool, can you file an issue to follow up on this?

Collaborator Author (lezcano):

I think there is no issue to follow up on here. We should either say that we support the latest NumPy release, or I can change it to "the same NumPy release that PyTorch supports". I don't think we have a programmatic way to get this, but I remember it's written down somewhere; @albanD knows where, IIRC. I can change the wording and link to the relevant place where we track this.

Contributor:

What I mean is the following:

1. Clearly we are encoding some NumPy behavior based on some version of NumPy, and as written it seems like this can differ from the installed version of NumPy.
2. "Latest" as a descriptor isn't super helpful for the user: they could be using an old version of PyTorch, or NumPy could have just updated between PyTorch releases.
3. If we know what the version in point 1 is, we should just tell the user (in code!) so they can check and do what they will with that information, such as installing a matching NumPy version or upgrading PyTorch.

Collaborator Author (lezcano):

For point 3, do we currently have a way to retrieve the NumPy version we support when using it in eager mode or when running the tests?

Collaborator:

@lezcano isn't the north star here that the PyTorch runtime should match the behavior of the currently installed NumPy version (with, obviously, only a subset of versions being properly supported and some leading to nice unsupported errors)?
As of today we're basically targeting a given version of NumPy 2.X, right? And I would agree that we should make sure that we have CI running with it.

Collaborator Author (lezcano):

Yes, at the moment we are implementing the latest version of NumPy (pretty much, modulo that NEP 50 point). Then I guess we'll be able to add support for NumPy 2.0 (while keeping support for 1.X as well) once we support it in core.

Collaborator:

That sounds good. I would just not say "the last version of NumPy" but "the NumPy 2.0 pre-release", so that it doesn't become stale if we don't update it.

Collaborator Author (lezcano):

That makes complete sense. Updating.

array. If this is not possible, one may often recover the original behavior
by casting the NumPy scalar to the relevant Python scalar type ``bool/int/float``.

- Negative strides: ``np.flip`` and slicing with a negative step return a copy.
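The scalar-casting workaround mentioned above, as a minimal plain-NumPy sketch (illustrative only):

```python
import numpy as np

# Cast the NumPy scalar to the corresponding Python scalar type:
s = np.float32(0.5)
x = float(s)           # plain Python float
n = int(np.int64(7))   # plain Python int
```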
Contributor:

not a question for this documentation, but why don't we fall back in this case?

Collaborator Author (lezcano):

We could, but we found that the pattern `x[::-1].fn()`, where `fn` is an out-of-place op, is relatively common (it's in torchbench, even), so I think it's worth this small dissonance. If people don't agree, we can always reconsider and fall back, or even consider implementing this ourselves, as inductor does support negative strides.
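For what it's worth, a plain-NumPy sketch of that pattern, and of where the copy-vs-view difference can surface (run eagerly here, for illustration):

```python
import numpy as np

# The common pattern: negative-step slice followed by an out-of-place op.
x = np.arange(4)
y = x[::-1] + 1  # out-of-place: view vs. copy is unobservable in the result

# Where the semantics would differ: writing through the reversed slice.
# In plain NumPy x[::-1] is a view, so this mutates x (sets x[3]).
x[::-1][0] = 99
```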

The best place to start is the
`tutorial with general advice for how to debug this sort of torch.compile issue <https://pytorch.org/docs/main/torch.compiler_faq.html#why-am-i-not-seeing-speedups>`__.

Some graph breaks may happen because of the use of unsupported features. See
Contributor:

nit: both of these features exist in PyTorch as well, so I don't see why this advice should exist in the NumPy-specific portion of the documentation.

Collaborator Author (lezcano):

The intuition behind this (and something that I tried to follow both in the post and in the docs) is that the reader may be someone who comes from NumPy and does not necessarily have much PyTorch experience. As such, I tried to cross-reference quite a bit and avoid assuming too much PyTorch knowledge.

lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: b56d512939aeb0cac4c69be0f5be7ab22c77f4b9
Pull Request resolved: #109710
docs/source/torch.compiler_faq.rst: 8 resolved review threads (7 outdated)
array. If this is not possible, one may often recover the original behavior
by casting the NumPy scalar to the relevant Python scalar type ``bool/int/float``.

- Negative strides: ``np.flip`` and slicing with a negative step return a copy.
Collaborator:

Consider inverting

Suggested change
- Negative strides: ``np.flip`` and slicing with a negative step return a copy.
- Negative strides: views that result in negative strides will instead copy, e.g. ``np.flip`` or slicing with a negative step.
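In plain NumPy, ``np.flip`` does indeed produce a negative-stride view, which is what the suggested wording captures (a quick eager-mode check, for illustration):

```python
import numpy as np

x = np.arange(4)
v = np.flip(x)  # in plain NumPy: a view of x with a negative stride
```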

docs/source/torch.compiler_faq.rst Outdated Show resolved Hide resolved
@lezcano (Collaborator Author) commented Sep 20, 2023

@peterbell10 addressed the review

lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: a332f4d1a6df161589397a022d4363c3de4ab0b5
Pull Request resolved: #109710
docs/source/torch.compiler_faq.rst Outdated Show resolved Hide resolved

If the program does work when importing ``torch._numpy as np``, chances are
that the bug is in TorchDynamo. If this is the case, please feel free to open
an issue with a minimal reproducer.
Collaborator:

That sounds worth checking. If it doesn't work, then open an issue?

lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: 2e11fc9851d479802c057e3cab8ec31c17d33b9a
Pull Request resolved: #109710
lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: 5cdc70a3e5017b77c59d07e29353a99b81d31dd2
Pull Request resolved: #109710
@lezcano (Collaborator Author) commented Sep 20, 2023

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 20, 2023
@pytorchmergebot commented:

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team: raised by workflow job.

lezcano added a commit that referenced this pull request Sep 20, 2023
ghstack-source-id: 3e29b38d0bc574ab5f35eee34ebb37fa6238de7e
Pull Request resolved: #109710
@lezcano lezcano added module: numpy Related to numpy support, and also numpy compatibility of our operators module: dynamo release notes: dynamo labels Sep 20, 2023
@lezcano (Collaborator Author) commented Sep 20, 2023

@pytorchbot merge

@pytorchmergebot commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

lezcano added a commit that referenced this pull request Sep 21, 2023
ghstack-source-id: 3e29b38d0bc574ab5f35eee34ebb37fa6238de7e
Pull Request resolved: #109710
atalman pushed a commit that referenced this pull request Sep 21, 2023
ghstack-source-id: 3e29b38d0bc574ab5f35eee34ebb37fa6238de7e
Pull Request resolved: #109710
@facebook-github-bot facebook-github-bot deleted the gh/Lezcano/239/head branch September 24, 2023 14:22
Labels: ciflow/trunk, Merged, module: dynamo, module: numpy, open source, release notes: dynamo, topic: docs, triaged

8 participants