Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd #86245
Conversation
This looks basically fine but deferring to @albanD for final review.
Sounds good to me.
…rds, use in aot autograd"

We were running into a few issues with running multithreaded backwards in aot_autograd, such as #86136 and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. Multithreaded backwards is lost in translation when we trace out the backwards anyway, and it adds a lot of additional complexity.
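As a rough illustration of the mechanism (not PyTorch's actual implementation, which lives in the C++ autograd engine), a context manager that toggles a thread-local flag and restores the previous value on exit might be sketched like this; all names here are hypothetical:

```python
import threading
from contextlib import contextmanager

# Hypothetical thread-local state standing in for the engine's
# multithreading flag; this is illustrative only.
_state = threading.local()

def multithreading_enabled() -> bool:
    # Multithreaded backward is considered on by default
    # if the flag was never toggled on this thread.
    return getattr(_state, "enabled", True)

@contextmanager
def set_multithreading_enabled(enabled: bool):
    # Save the previous value so nested contexts restore correctly.
    prev = multithreading_enabled()
    _state.enabled = enabled
    try:
        yield
    finally:
        _state.enabled = prev
```

Restoring the saved value in a `finally` block keeps the flag correct even if the body raises, which matters when tracing code (such as aot_autograd) wraps user code that may fail.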
… aot autograd ghstack-source-id: a608e1cba5489ab9e28a9c5e511567f89bf827c1 Pull Request resolved: #86245
Small error in doc, good to go otherwise!
docs/source/torch.rst (Outdated)
@@ -268,7 +268,7 @@ Examples::
     set_grad_enabled
     is_grad_enabled
     inference_mode
-    is_inference_mode_enabled
+    set_multithreading_enabled
This should be reverted.
good catch, thanks
SGTM!
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Successfully rebased
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here.
Hey @eellison.
… aot autograd (#86245)
Summary: We were running into a few issues with running multithreaded backwards in aot_autograd, such as #86136 and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. Multithreaded backwards is lost in translation when we trace out the backwards anyway, and it adds a lot of additional complexity.
Pull Request resolved: #86245
Approved by: https://github.com/albanD, https://github.com/yf225
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/d04889323e2bc0b7321b76e564292565c88b9a5e
Reviewed By: seemethere
Differential Revision: D40167028
Pulled By: seemethere
fbshipit-source-id: f427c71e528deaa494521a61fcbf789d1a964711
Stack from ghstack (oldest at bottom):
We were running into a few issues with running multithreaded backwards in aot_autograd, such as #86136 and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. Multithreaded backwards is lost in translation when we trace out the backwards anyway, and it adds a lot of additional complexity.
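Assuming the API lands as the docs diff suggests (`torch.autograd.set_multithreading_enabled`), a minimal usage sketch would force a sequential backward pass like this:

```python
import torch

x = torch.randn(4, requires_grad=True)

# Run backward with the autograd engine's multithreading disabled;
# gradients are computed sequentially but should be identical.
with torch.autograd.set_multithreading_enabled(False):
    loss = (x * x).sum()
    loss.backward()

# d(sum(x^2))/dx = 2x
assert torch.allclose(x.grad, 2 * x)
```

Wrapping only the backward-producing region in the context manager keeps the rest of the program free to use the default multithreaded engine.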