Conversation

@cbalioglu (Contributor)

Thanks to @bdhirsh's work, we now have room for new dispatch keys in the DispatchKey enum. This PR adds two new keys for the out-of-core Fake Tensor and Deferred Module Initialization features.
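
For context on how such a key gets used (this is not part of the diff): an out-of-core extension claims a dispatch key by registering kernels or a fallback against it. Below is a minimal sketch assuming the new key ends up being named `Fake`; the fallthrough registration is purely illustrative and is not what torchdistx actually installs.

```cpp
#include <torch/library.h>

// Minimal sketch (assumed key name "Fake"): registering a fallthrough makes
// every operator skip this dispatch key by default. A real extension would
// instead install a boxed fallback here that intercepts ops and produces
// storage-less ("fake") tensors.
TORCH_LIBRARY_IMPL(_, Fake, m) {
  m.fallback(torch::CppFunction::makeFallthrough());
}
```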

@facebook-github-bot (Contributor) commented Apr 20, 2022

💊 CI failures summary and remediations

As of commit f3815bb (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-04-27T14:53:34.1885711Z processing existing schema:  text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0)
2022-04-27T14:53:34.1886760Z processing existing schema:  count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-04-27T14:53:34.1888011Z processing existing schema:  duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-04-27T14:53:34.1889351Z processing existing schema:  source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0)
2022-04-27T14:53:34.1891075Z processing existing schema:  line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0)
2022-04-27T14:53:34.1892089Z processing existing schema:  __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-04-27T14:53:34.1893324Z processing existing schema:  enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-04-27T14:53:34.1894605Z processing existing schema:  disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-04-27T14:53:34.1896391Z processing existing schema:  _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0)
2022-04-27T14:53:34.1898034Z processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0)
2022-04-27T14:53:34.1898475Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2022-04-27T14:53:34.1898497Z 
2022-04-27T14:53:34.1898632Z Broken ops: [
2022-04-27T14:53:34.1899116Z 	aten::_thnn_fused_lstm_cell_backward_impl(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor)
2022-04-27T14:53:34.1899182Z ]
2022-04-27T14:53:34.2930153Z + cleanup
2022-04-27T14:53:34.2930300Z + retcode=1
2022-04-27T14:53:34.2930425Z + set +x
2022-04-27T14:53:34.2964643Z ##[error]Process completed with exit code 1.
2022-04-27T14:53:34.3002114Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-27T14:53:34.3002185Z with:

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@cbalioglu (Contributor Author)

Per our discussion in today's meeting, bumping up this PR.

cc @bdhirsh

@bdhirsh (Contributor)

Are you sure that you want Fake to have higher priority than ZeroTensor/Negative/Conjugate? (I think I'd expect it to have ~ the same priority as the Python key, right above backend select, or you might hit weird issues with code that uses forward mode AD / complex numbers)

@cbalioglu (Contributor Author)

In fact it works in both orders, just with different effects (a zero tensor gets treated as fake vs. a fake tensor gets treated as zero). Anyway, as you suggested, I moved it next to Python to make it more consistent with the behavior of Python subclasses.
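
To make the ordering question concrete: the dispatcher takes the union of the input tensors' dispatch key sets and services the highest-priority key in it first, and priority follows declaration order in the `DispatchKey` enum. A rough sketch of that behavior follows; the `highestPriorityTypeId()` accessor and the include paths reflect the c10 headers as I understand them and should be treated as assumptions, as should the `Fake` key name.

```cpp
#include <c10/core/DispatchKey.h>
#include <c10/core/DispatchKeySet.h>
#include <iostream>

int main() {
  // A tensor that is both a zero tensor and fake carries both functionality
  // bits in its dispatch key set.
  c10::DispatchKeySet ks =
      c10::DispatchKeySet(c10::DispatchKey::ZeroTensor) |
      c10::DispatchKeySet(c10::DispatchKey::Fake);

  // The key declared later in the DispatchKey enum has higher priority and is
  // dispatched to first, which is why moving Fake relative to ZeroTensor
  // changes whether the fake or the zero-tensor kernel sees the op first.
  std::cout << c10::toString(ks.highestPriorityTypeId()) << std::endl;
  return 0;
}
```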

@bdhirsh (Contributor) left a comment

Left a comment, but preemptively approving as per the composability conversation earlier today

@cbalioglu (Contributor Author)

Note: verified that the failing GitHub Action is not related to the changes included in this PR.

@cbalioglu (Contributor Author)

@pytorchbot merge this please

@github-actions (Contributor)

Hey @cbalioglu.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@cbalioglu added the 'release notes: composability' and 'topic: not user facing' labels on Apr 27, 2022.
facebook-github-bot pushed a commit that referenced this pull request on Apr 29, 2022
…ion (#76139)

Summary:
Thanks to bdhirsh's work, we now have room for new dispatch keys in `DispatchKey` enum. This PR adds two new keys for out-of-core [Fake Tensor](https://pytorch.org/torchdistx/latest/fake_tensor.html) and [Deferred Module Initialization](https://pytorch.org/torchdistx/latest/deferred_init.html) features.

Pull Request resolved: #76139
Approved by: https://github.com/bdhirsh

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/a0bf0f5611b0986405de28cca2f198eef22ac4ec

Reviewed By: osalpekar

Differential Revision: D35992313

fbshipit-source-id: 045b4e4021605b063da0ad7bdd1ee08ceec468d3