Ensure that vmap is restored properly if an exception is thrown during frame eval #122074
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/122074
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (4 Unrelated Failures) As of commit 1434354 with merge base b6bcd09:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk.
BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
…hrown during frame eval" We save and restore the DynamicLayerStack during frame eval, but since an fx graph has no way to express a try/finally, we just assume the pop will happen. If an exception is thrown between the push and the pop, the stack is left in a state that corrupts subsequent operations. Make sure that if the stack is in a bad state we restore it after frame eval. [ghstack-poisoned]
giving this to @zou3519
# Ensure that if an assertion occurs after graph pushes
# something onto the DynamicLayerStack then we pop it off (the
# constructed graph code isn't guarded with try/finally).
with torch._C._functorch._PreserveDynamicLayerStack():
    return fn(*args, **kwargs)
I'm kind of confused at this one. Dynamo has a mechanism to "undo the context manager" if tracing fails. Are you saying that Dynamo traced a graph, passed it to the backend ("eager" in this case), and that failed somewhere in the middle?
Correct. graph_module.py is producing python code which looks like this:
def forward(self, L_fn_closure_0_cell_contents : torch.Tensor, L_inputs_0_ : torch.Tensor):
l_fn_closure_0_cell_contents = L_fn_closure_0_cell_contents
arg = L_inputs_0_
lazy_load_decompositions = torch._functorch.vmap.lazy_load_decompositions()
size = arg.size(0)
ne = size != size
    _saved_tensors_hooks_disable = torch._C._autograd._saved_tensors_hooks_disable("torch.func transforms don't yet support saved tensor hooks. Please open an issue with your use case.")
    _vmap_increment_nesting = torch._C._functorch._vmap_increment_nesting(3, 'error')
_add_batch_dim = torch._C._functorch._add_batch_dim(arg, 0, 1); arg = None
sum_1 = _add_batch_dim.sum(0)
sum_2 = _add_batch_dim.sum(1); _add_batch_dim = None
add = sum_1 + sum_2; sum_1 = sum_2 = None
batched_output = add + l_fn_closure_0_cell_contents; add = l_fn_closure_0_cell_contents = None
actual = torch._C._functorch._remove_batch_dim(batched_output, 1, size, 0); batched_output = size = None
_vmap_decrement_nesting = torch._C._functorch._vmap_decrement_nesting()
_saved_tensors_hooks_enable = torch._C._autograd._saved_tensors_hooks_enable()
return (actual,)
the line
batched_output = add + l_fn_closure_0_cell_contents
is raising the error "TypeError: unsupported operand type(s) for +: 'Tensor' and 'function'"
Since the decrement isn't guarded with a try/finally, it never happens and we end up out of sync.
…hrown during frame eval" We save and restore the DynamicLayerStack during frame eval, but since an fx graph has no way to express a try/finally, we just assume the pop will happen. If an exception is thrown between the push and the pop, the stack is left in a state that corrupts subsequent operations. Make sure that if the stack is in a bad state we restore it after frame eval.

Repro, before:
```
$ rm test/dynamo_skips/TestSparseCPU.test_log1p_cpu_uint8
$ rm test/dynamo_expected_failures/FuncTorchHigherOrderOpTests.test_vmap_free_tensor
$ PYTORCH_TEST_WITH_DYNAMO=1 pytest test/jit/test_sparse.py test/dynamo/test_dynamic_shapes.py test/inductor/test_torchinductor_dynamic_shapes.py test/test_sparse.py -k 'test_log1p_cpu_uint8'
============= 1 passed, 8588 deselected in 9.75s =============
$ PYTORCH_TEST_WITH_DYNAMO=1 pytest test/jit/test_sparse.py test/dynamo/test_dynamic_shapes.py test/inductor/test_torchinductor_dynamic_shapes.py test/test_sparse.py -k 'test_vmap_free_tensor_dynamic_shapes or test_log1p_cpu_uint8'
================== short test summary info ===================
FAILED [0.0632s] test/test_sparse.py::TestSparseCPU::test_log1p_cpu_uint8 - AssertionError: "only Tensors of floating point dtype can require gradients" does not match "You are attempting to call Tensor.requires_grad_() (or perhaps using torch.autograd.functional.* APIs) inside of a function ...
======= 1 failed, 1 skipped, 8587 deselected in 10.99s =======
```
(Note that adding test_vmap_free_tensor_dynamic_shapes causes test_log1p_cpu_uint8 to fail.)

After:
```
$ rm test/dynamo_skips/TestSparseCPU.test_log1p_cpu_uint8
$ rm test/dynamo_expected_failures/FuncTorchHigherOrderOpTests.test_vmap_free_tensor
$ PYTORCH_TEST_WITH_DYNAMO=1 pytest test/jit/test_sparse.py test/dynamo/test_dynamic_shapes.py test/inductor/test_torchinductor_dynamic_shapes.py test/test_sparse.py -k 'test_log1p_cpu_uint8'
============= 1 passed, 8588 deselected in 9.89s =============
$ PYTORCH_TEST_WITH_DYNAMO=1 pytest test/jit/test_sparse.py test/dynamo/test_dynamic_shapes.py test/inductor/test_torchinductor_dynamic_shapes.py test/test_sparse.py -k 'test_vmap_free_tensor_dynamic_shapes or test_log1p_cpu_uint8'
======= 1 passed, 1 skipped, 8587 deselected in 11.34s =======
```
(test_vmap_free_tensor_dynamic_shapes passes either way.)

[ghstack-poisoned]
…g frame eval ghstack-source-id: ec14c1446f81d2b08b8005bfafe3b44587510749 Pull Request resolved: #122074
…g frame eval ghstack-source-id: 752f82729d6f16ff5eb621eed2d5065775a66295 Pull Request resolved: pytorch/pytorch#122074
Looks reasonable to me, accepting to unblock. @zou3519 please comment if there's anything else you wanted to add
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased.
@pytorchbot merge
Merge failed. Reason: This PR needs a label. To add a label, you can comment to pytorchbot. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 3 jobs have failed; the first few of them are: trunk / win-vs2019-cpu-py3 / test (default, 1, 3, windows.4xlarge.nonephemeral), trunk / linux-focal-cuda12.1-py3.10-gcc9 / test (nogpu_AVX512, 1, 1, linux.2xlarge), trunk / linux-focal-cuda12.1-py3.10-gcc9 / test (nogpu_NO_AVX2, 1, 1, linux.2xlarge). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f existing failures fixed by #125706
❌ 🤖 pytorchbot command failed.
@pytorchbot merge -f "existing failures fixed by #125706"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
The original change was about 9.5% slower than the backout. This improves it to be only about 1.41% slower than the backout.

Fixes #126293

Ran torchbench 3 times on each change. Perf values before (stable), after (fix), and with #122074 backed out (backout):
```
../inductor-tools/scripts/modelbench/inductor_single_run.sh single inference performance torchbench pyhpc_isoneutral_mixing amp first dynamic cpp
stable:  43.948x 45.754x 44.906x
fix:     47.505x 49.987x 47.493x
backout: 48.243x 48.199x 48.192x

../inductor-tools/scripts/modelbench/inductor_single_run.sh single inference performance torchbench pyhpc_equation_of_state amp first static default
stable:  15.224x 13.286x 15.354x
fix:     16.402x 16.370x 16.183x
backout: 16.554x 16.675x 16.787x

../inductor-tools/scripts/modelbench/inductor_single_run.sh single inference performance torchbench lennard_jones float32 first static default
stable:  1.712x 1.651x 1.640x
fix:     1.804x 1.798x 1.792x
backout: 1.864x 1.824x 1.836x
```
[ghstack-poisoned]
The original change was about 9.5% slower than before #122074. This improves it to be only about 1.4% slower. Also touched up some unrelated nits that the linter complained about.

Fixes #126293

Ran torchbench 3 times on each change. Perf values before (stable), after (fix), and with #122074 backed out (backout):
```
../inductor-tools/scripts/modelbench/inductor_single_run.sh single inference performance torchbench pyhpc_isoneutral_mixing amp first dynamic cpp
stable:  43.948x 45.754x 44.906x
fix:     47.505x 49.987x 47.493x
backout: 48.243x 48.199x 48.192x

../inductor-tools/scripts/modelbench/inductor_single_run.sh single inference performance torchbench pyhpc_equation_of_state amp first static default
stable:  15.224x 13.286x 15.354x
fix:     16.402x 16.370x 16.183x
backout: 16.554x 16.675x 16.787x

../inductor-tools/scripts/modelbench/inductor_single_run.sh single inference performance torchbench lennard_jones float32 first static default
stable:  1.712x 1.651x 1.640x
fix:     1.804x 1.798x 1.792x
backout: 1.864x 1.824x 1.836x
```
Pull Request resolved: #126996 Approved by: https://github.com/jansel
We save and restore the DynamicLayerStack during frame eval, but since an fx graph has no way to express a try/finally, we just assume the pop will happen. If an exception is thrown between the push and the pop, the stack is left in a state that corrupts subsequent operations. Make sure that if the stack is in a bad state we restore it after frame eval.
Repro:
before:
(Note that adding test_vmap_free_tensor_dynamic_shapes causes test_log1p_cpu_uint8 to fail.)
after:
(test_vmap_free_tensor_dynamic_shapes passes either way)
Stack from ghstack (oldest at bottom):
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang