Sparse CSR CUDA: fix input checks for addmm and mm #66485
Conversation
The errors for incorrectly sized inputs should match those raised by the dense variants of these functions. Moved addmm_out_sparse_csr_dense_cuda from SparseCsrTensorMath.cu and removed an unnecessary device check. [ghstack-poisoned]
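For context, the kind of shape validation the dense addmm path performs, and which this PR makes the sparse CSR path mirror, can be sketched as follows. The function name check_addmm_sizes, the Sizes alias, and the exact error messages are illustrative stand-ins, not PyTorch's actual API:

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative alias for a tensor's shape.
using Sizes = std::vector<long>;

std::string shape_str(const Sizes& s) {
  return std::to_string(s[0]) + "x" + std::to_string(s[1]);
}

// Hypothetical sketch of addmm-style input checks: result = self + mat1 @ mat2.
// The point of the PR is that the sparse CSR path should raise the same kind
// of errors for bad shapes as the dense path does.
void check_addmm_sizes(const Sizes& self, const Sizes& mat1, const Sizes& mat2) {
  if (self.size() != 2 || mat1.size() != 2 || mat2.size() != 2) {
    throw std::invalid_argument("addmm: expected 2-D matrices");
  }
  // Inner dimensions of the matrix product must agree.
  if (mat1[1] != mat2[0]) {
    throw std::invalid_argument(
        "mat1 and mat2 shapes cannot be multiplied (" + shape_str(mat1) +
        " and " + shape_str(mat2) + ")");
  }
  // self must match the shape of the product mat1 @ mat2.
  if (self[0] != mat1[0] || self[1] != mat2[1]) {
    throw std::invalid_argument(
        "input shape " + shape_str(self) +
        " is incompatible with the matrix multiplication result");
  }
}
```

A mismatched inner dimension (e.g. 2x5 times 4x3) should fail with the same message whether the inputs are dense or sparse CSR.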
    const Scalar& beta,
    const Scalar& alpha,
    Tensor& result) {
  TORCH_INTERNAL_ASSERT_DEBUG_ONLY(mat1.is_sparse_csr());
As mentioned in #63583, TORCH_INTERNAL_ASSERT_DEBUG_ONLY doesn't run in CI or in the binaries we distribute. It might make sense to use a regular internal assert, or to explicitly throw an error if this condition isn't met.
TORCH_INTERNAL_ASSERT_DEBUG_ONLY is not something that usually needs to be tested. It could all be removed and nothing would change. It's just my way of warning a potential future developer (who surely develops with DEBUG=1) that this function is intended to be used only when mat1 is a sparse CSR matrix.
I'd rather remove the assert completely, or replace it with a comment, than turn it into a regular assert that, 100% of the time in the current code, evaluates to assert(true). It could be expensive to check is_sparse_csr; we never know without measuring!
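The debug-only behavior under discussion can be sketched as follows. ASSERT_DEBUG_ONLY and the MY_DEBUG flag are illustrative stand-ins for TORCH_INTERNAL_ASSERT_DEBUG_ONLY and building with DEBUG=1, not PyTorch's actual macros; the key property is that in a non-debug build the condition is never even evaluated, so a potentially expensive check costs nothing:

```cpp
#include <cassert>

// Sketch of a debug-only assertion. MY_DEBUG stands in for a DEBUG=1 build;
// without it, the macro expands to a no-op and the condition expression is
// not evaluated at all.
#ifdef MY_DEBUG
#define ASSERT_DEBUG_ONLY(cond) assert(cond)
#else
#define ASSERT_DEBUG_ONLY(cond) ((void)0)
#endif

// Counter to observe whether the check actually ran.
static int check_calls = 0;

// Stand-in for a potentially expensive validity check (e.g. is_sparse_csr()).
bool expensive_check() {
  ++check_calls;
  return true;
}
```

This is why turning it into a regular assert is not a free change: the release build would start paying for the check on every call.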
Looks fine, but take a look at the comment about TORCH_INTERNAL_ASSERT_DEBUG_ONLY.
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Pull Request resolved: #66485
The errors for incorrectly sized inputs should match the dense variants of functions. Moved addmm_out_sparse_csr_dense_cuda from SparseCsrTensorMath.cu and removed unnecessary device check.
cc nikitaved pearu cpuhrsch IvanYashchuk
Test Plan: Imported from OSS
Reviewed By: jbschlosser
Differential Revision: D31764036
Pulled By: cpuhrsch
fbshipit-source-id: 76900fe9e4a49474695a01f34bad41cb3422321c
Stack from ghstack:
- triangular_solve_out #62180
- torch.addmm #65606
- torch.add with all inputs sparse #64391
- addmv_out #61536
- triangular_solve_out #61858
- torch.add with all inputs sparse #63948
- torch.addmm with all inputs sparse #63511
- addmm and mm #66485 (this PR)

The errors for incorrectly sized inputs should match the dense variants of functions.
Moved addmm_out_sparse_csr_dense_cuda from SparseCsrTensorMath.cu and removed unnecessary device check.
cc @nikitaved @pearu @cpuhrsch @IvanYashchuk
Differential Revision: D31764036