Conversation

@huydhn huydhn commented Feb 7, 2024

Fix internal failure D53291154

From Alban: the change is breaking because the `alpha` argument is now keyword-only (via the `*` marker), while it was previously fine to pass it positionally for the `rsub.Scalar` overload.

```
 _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch/_dynamo/eval_frame.py", line 453, in _fn
    return fn(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch/_dynamo/eval_frame.py", line 615, in catch_errors
    return callback(frame, cache_entry, hooks, frame_state)
  File "torch/_dynamo/convert_frame.py", line 390, in _convert_frame_assert
    return _compile(
  File "python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "torch/_dynamo/convert_frame.py", line 650, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "torch/_dynamo/utils.py", line 248, in time_wrapper
    r = func(*args, **kwargs)
  File "torch/_dynamo/convert_frame.py", line 531, in compile_inner
    out_code = transform_code_object(code, transform)
  File "torch/_dynamo/bytecode_transformation.py", line 1033, in transform_code_object
    transformations(instructions, code_options)
  File "torch/_dynamo/convert_frame.py", line 155, in _fn
    return fn(*args, **kwargs)
  File "torch/_dynamo/convert_frame.py", line 496, in transform
    tracer.run()
  File "torch/_dynamo/symbolic_convert.py", line 2125, in run
    super().run()
  File "torch/_dynamo/symbolic_convert.py", line 787, in run
    and self.step()
  File "torch/_dynamo/symbolic_convert.py", line 750, in step
    getattr(self, inst.opname)(inst)
  File "torch/_dynamo/symbolic_convert.py", line 469, in wrapper
    return inner_fn(self, inst)
  File "torch/_dynamo/symbolic_convert.py", line 1249, in CALL_FUNCTION_KW
    self.call_function(fn, args, kwargs)
  File "torch/_dynamo/symbolic_convert.py", line 651, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "torch/_dynamo/variables/torch.py", line 614, in call_function
    tensor_variable = wrap_fx_proxy(
  File "torch/_dynamo/variables/builder.py", line 1285, in wrap_fx_proxy
    return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
  File "torch/_dynamo/variables/builder.py", line 1370, in wrap_fx_proxy_cls
    example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
  File "torch/_dynamo/utils.py", line 1653, in get_fake_value
    raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
  File "torch/_dynamo/utils.py", line 1599, in get_fake_value
    ret_val = wrap_fake_exception(
  File "torch/_dynamo/utils.py", line 1140, in wrap_fake_exception
    return fn()
  File "torch/_dynamo/utils.py", line 1600, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "torch/_dynamo/utils.py", line 1720, in run_node
    raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
  File "torch/_dynamo/utils.py", line 1699, in run_node
    return node.target(*args, **kwargs)
  File "torch/utils/_stats.py", line 20, in wrapper
    return fn(*args, **kwargs)
  File "torch/_subclasses/fake_tensor.py", line 1637, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)
  File "torch/_subclasses/fake_tensor.py", line 1975, in dispatch
    return self._dispatch_impl(func, types, args, kwargs)
  File "torch/_subclasses/fake_tensor.py", line 2190, in _dispatch_impl
    r = func(*args, **kwargs)
  File "torch/_ops.py", line 571, in __call__
    return self_._op(*args, **kwargs)
  File "torch/_prims_common/wrappers.py", line 252, in _fn
    result = fn(*args, **kwargs)
```
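
For illustration, here is a minimal Python sketch of the failure mode described above; the function names and signatures are hypothetical stand-ins, not the actual `rsub.Scalar` schema:

```
# Hypothetical signatures illustrating the keyword-only change; not the real op schema.
def rsub_before(input, other, alpha=1):
    # `alpha` may be passed positionally
    return other - alpha * input

def rsub_after(input, other, *, alpha=1):
    # the `*` marker makes `alpha` keyword-only
    return other - alpha * input

rsub_before(2.0, 5.0, 3.0)         # ok: alpha=3.0 accepted positionally
rsub_after(2.0, 5.0, alpha=3.0)    # ok: keyword form still works
# rsub_after(2.0, 5.0, 3.0)        # TypeError: rsub_after() takes 2 positional arguments but 3 were given
```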

Pull Request resolved: #118907
Approved by: https://github.com/lezcano

(cherry picked from commit 3a1ae86)

pytorch-bot bot commented Feb 7, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/119351

Note: Links to docs will display an error until the docs builds have been completed.

❌ 36 New Failures, 2 Unrelated Failures

As of commit df9c6ca with merge base a8bd593:

NEW FAILURES - The following jobs have failed:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


huydhn commented Feb 7, 2024

Sorry for spamming, I just need to test the new cherry-picking script. No need to review this; it is a one-time test.

@huydhn huydhn closed this Feb 7, 2024
@huydhn huydhn mentioned this pull request Feb 7, 2024
@huydhn huydhn deleted the cherry-pick-118907-by-huydhn branch February 7, 2024 17:34
pytorchmergebot pushed a commit that referenced this pull request Feb 12, 2024
After pytorch/test-infra#4758, we can create a new workflow on PyTorch that receives the `try-cherry-pick` dispatch event from the bot and creates the cherry-pick PR.

* [x] Cherry-pick a PR after it has landed and create a cherry-pick PR against the target release branch.
* [ ] The second part is to update the release tracker with the new PR's info; this will be done in a subsequent PR.
* [ ] ghstack is not yet supported.
* [ ] Cherry-picking a reverted commit is not yet supported (from @kit1980's comment).
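
As a rough illustration of the trigger side, the bot could fire the dispatch event roughly as sketched below; the payload field names are assumptions for illustration, not the exact contract the workflow expects:

```
# Hedged sketch of sending a `try-cherry-pick` repository_dispatch event via the
# GitHub REST API. Payload field names are illustrative assumptions.
import requests

def send_try_cherry_pick(token: str, pr_number: int, onto: str, classification: str) -> None:
    resp = requests.post(
        "https://api.github.com/repos/pytorch/pytorch/dispatches",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "event_type": "try-cherry-pick",
            "client_payload": {
                "pr_num": pr_number,          # assumed field name
                "branch": onto,               # e.g. "release/2.2"
                "classification": classification,  # e.g. "release"
            },
        },
        timeout=30,
    )
    resp.raise_for_status()  # the API returns 204 No Content on success
```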

### Testing

The script can be used locally:

```
python cherry_pick.py --onto release/2.2 --classification release --github-actor huydhn 118907
The cherry pick PR is at #119351
```
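
Under the hood, the local flow is roughly the following; this is a hedged sketch with assumed helper names, not the actual `cherry_pick.py` implementation:

```
# Sketch of a local cherry-pick flow: pick a landed commit onto a release branch
# and push a branch from which the cherry-pick PR can be opened.
import subprocess

def cherry_pick(commit_sha: str, onto_branch: str) -> str:
    pick_branch = f"cherry-pick-{commit_sha[:7]}-onto-{onto_branch.replace('/', '-')}"
    subprocess.run(["git", "fetch", "origin", onto_branch], check=True)
    subprocess.run(["git", "checkout", "-b", pick_branch, f"origin/{onto_branch}"], check=True)
    # -x appends "(cherry picked from commit ...)" to the new commit message
    subprocess.run(["git", "cherry-pick", "-x", commit_sha], check=True)
    subprocess.run(["git", "push", "origin", pick_branch], check=True)
    return pick_branch  # the cherry-pick PR is then opened from this branch via the GitHub API
```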

The test cherry-pick PR was created at #119351.

Unit testing this on CI is tricky, so I tested it out on canary instead.

* pytorch/pytorch-canary#193 (comment) created the PR at pytorch/pytorch-canary#201
  * One more test on canary with the new token: pytorch/pytorch-canary#193 (comment). From what I can see, the minimum required permission is `workflow`.
* Cherry-pick conflicts can still happen and need to be handled manually: pytorch/pytorch-canary#194 (comment)
* ~~Require a linked issue when cherry-picking regressions, critical fixes, or new features~~ pytorch/pytorch-canary#193 (comment). Relaxed this requirement to a suggestion.
Pull Request resolved: #119352
Approved by: https://github.com/atalman
clee2000 pushed a commit that referenced this pull request Feb 14, 2024