Allow more inserts before reIndexTopology #102312
Conversation
Summary: Currently, if you insert into JIT IR repeatedly at the same point in the middle of the graph, only about 40 inserts are allowed before the graph has to reindex. Repeated reindexing is O(N^2) behavior, which can lead to slow load times. This change tracks how many insertions happen at a single point (as when a function is being inlined) to predict how many future insertions will happen there, and adjusts how topology is assigned so there is enough room for the predicted insertions. In practice this allows around 2M inserts at a single point before reindexing. Test Plan: test_jit.py [ghstack-poisoned]
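The 40-insert figure quoted above is what gap bisection predicts: each insert at a fixed point takes the midpoint of the remaining gap, halving it, so an initial gap on the order of 2**40 is exhausted after about 40 inserts. A minimal sketch of that arithmetic (the function name and gap value are illustrative, not PyTorch's actual constants):

```python
def inserts_before_reindex(gap):
    """Count midpoint bisections a gap of `gap` positions supports
    before it drops below 2 and a renumbering is forced."""
    count = 0
    while gap >= 2:
        gap //= 2  # each insert at the same point halves the usable gap
        count += 1
    return count
```

This is why widening the gap at a hot insertion point (rather than bisecting a fixed-size one) lets far more inserts land before the next renumbering.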
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/102312
Note: Links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 91ce1e3. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@zdevito has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Nice!! This works only with the setInsertPoint API and not with n1->insertBefore(n2)... The former is the common idiom, and I don't know how to fix the latter anyway.
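The distinction the comment draws can be sketched in miniature (hypothetical Python names, not the real C++ API): inserts routed through a shared insert point all pass one cursor, where hits can be counted and used for prediction, while a direct insert-before call names an arbitrary neighbor each time and never touches that counter.

```python
class Graph:
    """Toy graph: nodes in a list, with an optional shared insert point."""

    def __init__(self):
        self.nodes = []
        self.insert_point = None  # index before which cursor inserts land
        self.insert_hits = 0      # observable only for the shared point

    def set_insert_point(self, index):
        self.insert_point = index
        self.insert_hits = 0      # new point, new prediction window

    def insert(self, name):
        # Routed through the shared cursor: countable, hence predictable.
        self.insert_hits += 1
        self.nodes.insert(self.insert_point, name)
        self.insert_point += 1    # keep inserting at the same logical point

    def insert_before(self, name, index):
        # Direct neighbor-relative insert: bypasses the counter entirely.
        self.nodes.insert(index, name)
```

Inlining uses the cursor idiom, which is why hit counting there is enough to cover the common slow case.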
@pytorchbot land
❌ 🤖 pytorchbot command failed:
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Differential Revision: [D46206617](https://our.internmc.facebook.com/intern/diff/D46206617) Pull Request resolved: pytorch#102312 Approved by: https://github.com/eellison
Stack from ghstack (oldest at bottom):
Differential Revision: D46206617
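The mechanism in the summary can be modeled as a toy (names and constants are purely illustrative, not the actual JIT internals): nodes carry spaced-out integer positions, an insert bisects the gap to its neighbor, and when a gap is exhausted the block renumbers, reserving room after each node proportional to the inserts previously observed there.

```python
INITIAL_SPACING = 1 << 16  # illustrative; the real spacing constant differs

class ToyBlock:
    """Nodes are [position, hits] pairs; hits counts how many inserts
    have landed directly after that node."""

    def __init__(self, n):
        self.nodes = [[i * INITIAL_SPACING, 0] for i in range(n)]
        self.reindex_count = 0

    def reindex(self):
        # Renumber every node (the linear step that makes repeated
        # reindexing quadratic overall), reserving a gap after each node
        # proportional to the inserts previously seen there -- the
        # "prediction" this PR adds.
        self.reindex_count += 1
        cur = 0
        for node in self.nodes:
            node[0] = cur
            cur += max(node[1], 1) * INITIAL_SPACING

    def insert_after(self, i):
        # Bisect the gap between node i and its successor; when the gap
        # is exhausted, renumber and retry.
        lo = self.nodes[i][0]
        hi = (self.nodes[i + 1][0] if i + 1 < len(self.nodes)
              else lo + 2 * INITIAL_SPACING)
        if hi - lo < 2:
            self.reindex()
            return self.insert_after(i)
        self.nodes[i][1] += 1
        self.nodes.insert(i + 1, [(lo + hi) // 2, 0])
```

Without the hits-proportional reservation in `reindex`, every ~16 inserts at the same point (log2 of the toy spacing) would force another full renumbering; with it, each renumbering buys proportionally more headroom, so renumberings become rarer the more a hot point is hit.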