Are tensors on same device? #62653
Conversation
💊 CI failures summary and remediations
As of commit 858c752 (more details on the Dr. CI page):
🕵️ 2 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages:
- linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 2, 2, linux.8xlarge.nvidia.gpu) (1/2) — Step: "Unknown"
This pull request was exported from Phabricator. Differential Revision: D29924464
Force-pushed from d6107e7 to a1d3603
Force-pushed from a1d3603 to 5ad598f
Force-pushed from 5ad598f to 7563b78
Force-pushed from 7563b78 to 2871296
Force-pushed from 2871296 to 65fe0b1
⚛️ CI Flow Status
You can add a comment to the PR and tag @pytorchbot with the following commands:
```
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun
# ciflow rerun with additional labels "-l <ciflow/label_name>", which is
# equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
```
For more information, please take a look at the CI Flow Wiki.
Force-pushed from 65fe0b1 to 1e08211
Force-pushed from 1e08211 to 4c29dd2
Summary:
Pull Request resolved: pytorch/pytorch#62653

This consolidates the checks that determine whether tensors live on the same device into a single line, using template parameter packs to unroll the check code.

The advantage of the new checking syntax is that it makes it easy to use static analysis to determine both whether the check is present and whether it is comprehensive. D30072495 includes a linter which performs this analysis.

Note that this is especially useful for PyTorch extensions, which don't receive this check automatically from codegen.

Reviewed By: ngimel
Differential Revision: D29924464
fbshipit-source-id: 18110f07f5b2dba9d231f767cfb0532849255bc7
Force-pushed from 4c29dd2 to f77f1f1
Summary:
Pull Request resolved: pytorch/FBGEMM#728
Pull Request resolved: pytorch/pytorch#62653

Test Plan:
```
buck test //caffe2/torch/fb/sparsenn:gpu_test
buck test //caffe2/torch/fb/sparsenn:test
```

Reviewed By: ngimel
Differential Revision: D29924464
fbshipit-source-id: 6c575dda8b707eb6df7e9675d2bb62ec8e541753
Force-pushed from f77f1f1 to 858c752
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.