Fix internal failure D53291154 #119351
Closed
Conversation
Fix internal failure D53291154

From alban: the change is breaking because the `alpha` argument is now keyword-only (via the `*` marker), while it was previously fine to pass it positionally for the `rsub.Scalar` overload.

```
_wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch/_dynamo/eval_frame.py", line 453, in _fn
    return fn(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch/_dynamo/eval_frame.py", line 615, in catch_errors
    return callback(frame, cache_entry, hooks, frame_state)
  File "torch/_dynamo/convert_frame.py", line 390, in _convert_frame_assert
    return _compile(
  File "python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "torch/_dynamo/convert_frame.py", line 650, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "torch/_dynamo/utils.py", line 248, in time_wrapper
    r = func(*args, **kwargs)
  File "torch/_dynamo/convert_frame.py", line 531, in compile_inner
    out_code = transform_code_object(code, transform)
  File "torch/_dynamo/bytecode_transformation.py", line 1033, in transform_code_object
    transformations(instructions, code_options)
  File "torch/_dynamo/convert_frame.py", line 155, in _fn
    return fn(*args, **kwargs)
  File "torch/_dynamo/convert_frame.py", line 496, in transform
    tracer.run()
  File "torch/_dynamo/symbolic_convert.py", line 2125, in run
    super().run()
  File "torch/_dynamo/symbolic_convert.py", line 787, in run
    and self.step()
  File "torch/_dynamo/symbolic_convert.py", line 750, in step
    getattr(self, inst.opname)(inst)
  File "torch/_dynamo/symbolic_convert.py", line 469, in wrapper
    return inner_fn(self, inst)
  File "torch/_dynamo/symbolic_convert.py", line 1249, in CALL_FUNCTION_KW
    self.call_function(fn, args, kwargs)
  File "torch/_dynamo/symbolic_convert.py", line 651, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "torch/_dynamo/variables/torch.py", line 614, in call_function
    tensor_variable = wrap_fx_proxy(
  File "torch/_dynamo/variables/builder.py", line 1285, in wrap_fx_proxy
    return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
  File "torch/_dynamo/variables/builder.py", line 1370, in wrap_fx_proxy_cls
    example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
  File "torch/_dynamo/utils.py", line 1653, in get_fake_value
    raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
  File "torch/_dynamo/utils.py", line 1599, in get_fake_value
    ret_val = wrap_fake_exception(
  File "torch/_dynamo/utils.py", line 1140, in wrap_fake_exception
    return fn()
  File "torch/_dynamo/utils.py", line 1600, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "torch/_dynamo/utils.py", line 1720, in run_node
    raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
  File "torch/_dynamo/utils.py", line 1699, in run_node
    return node.target(*args, **kwargs)
  File "torch/utils/_stats.py", line 20, in wrapper
    return fn(*args, **kwargs)
  File "torch/_subclasses/fake_tensor.py", line 1637, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)
  File "torch/_subclasses/fake_tensor.py", line 1975, in dispatch
    return self._dispatch_impl(func, types, args, kwargs)
  File "torch/_subclasses/fake_tensor.py", line 2190, in _dispatch_impl
    r = func(*args, **kwargs)
  File "torch/_ops.py", line 571, in __call__
    return self_._op(*args, **kwargs)
  File "torch/_prims_common/wrappers.py", line 252, in _fn
    result = fn(*args, **kwargs)
```

Pull Request resolved: #118907
Approved by: https://github.com/lezcano
(cherry picked from commit 3a1ae86)
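The breaking change can be illustrated with a minimal sketch (hypothetical function names and a simplified formula, not the actual `rsub.Scalar` signature): inserting a bare `*` in a Python signature makes every parameter after it keyword-only, so previously valid positional call sites start raising `TypeError`.

```python
# Before the change: alpha could be passed positionally.
def rsub_old(input, other, alpha=1):
    return other - alpha * input

# After the change: the bare `*` marker makes alpha keyword-only,
# so existing positional call sites break with a TypeError.
def rsub_new(input, other, *, alpha=1):
    return other - alpha * input

print(rsub_old(2, 10, 3))          # 4 — positional alpha works
print(rsub_new(2, 10, alpha=3))    # 4 — keyword alpha works
try:
    rsub_new(2, 10, 3)             # positional alpha now rejected
except TypeError as e:
    print("TypeError:", e)
```

This is why the fix restores a signature that keeps `alpha` acceptable positionally for this overload: existing callers pass it positionally.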
Sorry for spamming, I just need to test the new cherry-picking script. No need to review this; it is a one-time test.
pytorchmergebot pushed a commit that referenced this pull request on Feb 12, 2024:
After pytorch/test-infra#4758, we can create a new workflow on PyTorch to receive the `try-cherry-pick` dispatch event from the bot and create the cherry-pick PR.

* [x] Cherry-pick a PR after it has been landed and create a cherry-pick PR to the target release branch.
* [ ] The second part after this is to update the release tracker with the info. This will be done in a subsequent PR.
* [ ] ghstack is not yet supported.
* [ ] Cherry-picking a reverted commit is not yet supported (from @kit1980's comment).

### Testing

The script can be used locally:

```
python cherry_pick.py --onto release/2.2 --classification release --github-actor huydhn 118907
The cherry pick PR is at #119351
```

The test cherry-pick PR is created at #119351. Unit testing this on CI is tricky, so I tested it on canary instead:

* pytorch/pytorch-canary#193 (comment) creates the PR at pytorch/pytorch-canary#201
* One more test on canary with the new token: pytorch/pytorch-canary#193 (comment). The minimum required permission from what I see is `workflow`.
* Cherry-pick conflicts can happen and need to be handled manually: pytorch/pytorch-canary#194 (comment)
* ~Require a linked issue when cherry-picking regressions, critical fixes, or fixes for new features: pytorch/pytorch-canary#193 (comment)~ Relaxed this requirement to a suggestion.

Pull Request resolved: #119352
Approved by: https://github.com/atalman
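The git side of what the script automates can be sketched in a throwaway repository (a hypothetical illustration; the real `cherry_pick.py` additionally talks to the GitHub API to push the branch and open the PR, and the branch and file names below are made up). `git cherry-pick -x` is what records the "(cherry picked from commit ...)" trailer seen in the PR body above.

```shell
set -e
# Build a scratch repo with a release branch and a landed fix on main.
repo=$(mktemp -d) && cd "$repo"
git init -q -b main                      # -b needs git >= 2.28
git config user.email bot@example.com
git config user.name bot
echo "v1" > file.txt && git add file.txt && git commit -qm "initial"
git branch release/2.2                   # the cherry-pick target
echo "fix" >> file.txt && git commit -aqm "Fix internal failure"
fix_sha=$(git rev-parse HEAD)

# What the bot automates: branch off the release branch and
# cherry-pick the landed commit; -x records the source sha.
git checkout -q release/2.2
git checkout -qb cherry-pick-of-fix
git cherry-pick -x "$fix_sha"
git log -1 --pretty=%B                   # message ends with "(cherry picked from commit ...)"
```

Conflicts during the `git cherry-pick` step are exactly the manual-resolution case noted in the canary testing above.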
clee2000 pushed a commit that referenced this pull request on Feb 14, 2024 (same commit message as above).