Conversation

@peterbell10 (Collaborator) commented Oct 30, 2023

On my machine, `pytree.LeafSpec()` takes ~600ns but since every leaf spec is the
same, we can just use a global constant.

[ghstack-poisoned]
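
A minimal sketch of the idea, with simplified stand-ins for the pytree types (not the actual `torch.utils._pytree` code): since `LeafSpec` carries no per-leaf state, a single instance can be constructed at import time and reused for every leaf instead of paying the construction cost on each call.

```
class LeafSpec:
    """Simplified marker for a leaf node in a pytree."""
    num_leaves = 1
    num_children = 0

    def __repr__(self):
        return "*"

# Constructed once at import time; every leaf spec is identical, so this
# shared instance can be returned instead of calling LeafSpec() per leaf.
_LEAF_SPEC = LeafSpec()

def flatten_leaf(leaf):
    # Illustrative flatten step for a leaf: return the value and the shared
    # spec, avoiding the per-call object construction described above.
    return [leaf], _LEAF_SPEC
```
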
@pytorch-bot bot commented Oct 30, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/112392

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit e44f847 with merge base 29844ad:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Oct 30, 2023
ghstack-source-id: 05cff54
Pull Request resolved: pytorch#112392
@peterbell10 requested a review from lezcano October 30, 2023 16:21
@peterbell10 marked this pull request as ready for review October 30, 2023 16:21
@lezcano (Collaborator) left a comment

Fine, but I reckon we shouldn't invest that much time in optimising this implementation, as we are working towards moving to optree. See the stack #112110 and, more generally, the work of the author of those PRs.

@peterbell10 (Collaborator, Author)

@XuehaiPan I see that some parts of the codebase already use optree as an optional dependency. Is the plan to keep it as optional, meaning the python implementation would still be relevant?

@zou3519 (Contributor) commented Oct 30, 2023

In the short to medium term, both the Python and C++ (optree) pytree implementations will be relevant. The longer term is an open question, though we will likely need to keep the Python pytree implementation around so that Dynamo can trace through it.

@peterbell10 (Collaborator, Author)

@pytorchbot merge

@pytorch-bot bot added the ciflow/trunk label Oct 30, 2023
@pytorchmergebot (Collaborator)

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team: raised by workflow job.

@peterbell10 (Collaborator, Author)

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Oct 31, 2023 (#112393)

We commonly do some variation of `tree_leaves((args, kwargs))`. This adds a new
function `arg_tree_leaves(*args, **kwargs)` which takes advantage of the known
structure of `args` and `kwargs` to skip their `flatten_fn`.

I see ~1 us improvement per call for args + kwargs, or a 0.5 us improvement
when passing just one of `args` or `kwargs`. For shallow structures, this can be
proportionally quite significant. For example, the empty_strided call I've been
using as a benchmark:
```
args = ((100, 100), (100, 1))
kwargs = dict(device="cuda")
```
Sees a 30% speedup from this.

Pull Request resolved: #112393
Approved by: https://github.com/lezcano
ghstack dependencies: #112391, #112392
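
A rough sketch of the shape of this optimization, with a simplified `tree_leaves` standing in for the generic pytree flattener (illustrative, not the actual PyTorch implementation): because `args` is always a tuple and `kwargs` is always a dict, the outermost containers can be iterated directly rather than dispatched through their registered `flatten_fn`.

```
from typing import Any, List

def tree_leaves(tree) -> List[Any]:
    # Generic recursive flatten for the common containers (simplified).
    if isinstance(tree, (list, tuple)):
        return [leaf for child in tree for leaf in tree_leaves(child)]
    if isinstance(tree, dict):
        return [leaf for child in tree.values() for leaf in tree_leaves(child)]
    return [tree]

def arg_tree_leaves(*args, **kwargs) -> List[Any]:
    # args is always a tuple and kwargs is always a dict, so skip the
    # generic dispatch at the outermost level and flatten children directly.
    leaves: List[Any] = []
    for a in args:
        leaves.extend(tree_leaves(a))
    for v in kwargs.values():
        leaves.extend(tree_leaves(v))
    return leaves

# The benchmark case from the commit message:
args = ((100, 100), (100, 1))
kwargs = dict(device="cuda")
assert arg_tree_leaves(*args, **kwargs) == [100, 100, 100, 1, "cuda"]
```
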
pytorchmergebot pushed a commit that referenced this pull request Oct 31, 2023
Pull Request resolved: #112394
Approved by: https://github.com/lezcano
ghstack dependencies: #112391, #112392, #112393
pytorchmergebot pushed a commit that referenced this pull request Oct 31, 2023
Wherever we discard the output of `tree_map` it's better to call `tree_map_`
which doesn't unflatten the mapped results and so is a lot cheaper.
Pull Request resolved: #112417
Approved by: https://github.com/lezcano
ghstack dependencies: #112391, #112392, #112393, #112394
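
To illustrate the difference, a simplified sketch of the two functions (not the actual `torch.utils._pytree` code, which also returns the original tree from `tree_map_`): `tree_map` rebuilds the container structure around the mapped results, while `tree_map_` only applies the function for its side effects and skips the unflatten step.

```
def tree_map(fn, tree):
    # Maps fn over leaves and reconstructs the same container structure.
    if isinstance(tree, (list, tuple)):
        return type(tree)(tree_map(fn, child) for child in tree)
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    return fn(tree)

def tree_map_(fn, tree):
    # Applies fn for its side effects only; no output container is built.
    if isinstance(tree, (list, tuple)):
        for child in tree:
            tree_map_(fn, child)
    elif isinstance(tree, dict):
        for v in tree.values():
            tree_map_(fn, v)
    else:
        fn(tree)

# Usage: when only the side effect matters, tree_map_ avoids the rebuild.
seen = []
tree_map_(seen.append, ((1, 2), {"a": 3}))
assert seen == [1, 2, 3]
```
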
@facebook-github-bot deleted the gh/peterbell10/647/head branch November 3, 2023 14:27
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023 (pytorch#112392)
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023 (pytorch#112393)
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023 (pytorch#112417)
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023 (pytorch#112392)
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023 (pytorch#112393)
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023 (pytorch#112417)

Labels

ciflow/trunk, Merged, module: pytree, open source, topic: not user facing
