
Fix for mul(compressed, wrapped scalar) #91239

Closed
wants to merge 9 commits

Conversation

@nikitaved (Collaborator) commented Dec 21, 2022

Fixes #90819.

The `Scalar` path should have been picked up by the dispatcher, yet the path taken for a 0-dim wrapped scalar was broken.
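
For context, a minimal repro sketch of the failing case (hypothetical, assuming the shape of the failure reported in #90819; not code from this PR):

```python
import torch

# A sparse compressed (CSR) tensor and a 0-dim tensor; ATen treats
# the 0-dim operand as a "wrapped scalar" rather than a Scalar.
csr = torch.eye(3).to_sparse_csr()
wrapped = torch.tensor(2.0)

# Before this fix the wrapped-scalar mul path was broken for sparse
# compressed layouts; with it, the result matches the dense product.
out = csr * wrapped
torch.testing.assert_close(out.to_dense(), csr.to_dense() * wrapped)
```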

cc @pearu @cpuhrsch @amjames @bhosmer

@nikitaved added the `module: sparse` (Related to torch.sparse) and `release notes: sparse` (release notes category) labels, Dec 21, 2022
@pytorch-bot bot commented Dec 21, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91239

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b049b8f:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pearu (Collaborator) left a comment

I have a few nits but overall looks good! Thanks, @nikitaved!

test/test_sparse_csr.py (outdated):
if dtype == torch.result_type(sparse, scalar) and not enable_hybrid:
res_in_dense = sparse.to_dense().mul_(scalar)
res_in = sparse.mul_(scalar)
self.assertEqual(res_in.to_dense(), res_in_dense)
@pearu (Collaborator):

Nit as above:

Suggested change:
-    self.assertEqual(res_in.to_dense(), res_in_dense)
+    self.assertEqual(res_in, res_in_dense)

@nikitaved (Collaborator, Author):
Is it tested? :)

@pearu (Collaborator):
yes, it originates from #88749
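
For readers outside the test suite: since #88749, PyTorch's internal `TestCase.assertEqual` can compare a sparse tensor directly against a strided one. A rough public-API sketch of the same idea (not the test-suite code itself) relaxes the layout check in `assert_close`:

```python
import torch

csr = torch.eye(3).to_sparse_csr()
dense = torch.eye(3)

# check_layout=False makes assert_close convert both inputs to a
# common (strided) representation before comparing values.
torch.testing.assert_close(csr, dense, check_layout=False)
```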

test/test_sparse_csr.py (outdated, resolved)
if (t_.layout() == kStrided && src_.is_sparse_csr()) {
return mul_out_sparse_csr(t_.sparse_mask(src_), src_, r);
}
TORCH_CHECK(r.is_sparse_csr(), "Expected result Tensor to be of format CSR");
@pearu (Collaborator):

Just a note for a possible follow-up: generalizing this function to other sparse compressed layouts looks like a straightforward task. It would remove one of the blockers in test_gradcheck_sparse_csc_input of test/test_autograd.py.
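
For illustration, a Python-level sketch of the branch quoted above (assuming a build that includes this PR): multiplying a strided tensor by a CSR tensor first restricts the strided operand to the CSR sparsity pattern via `sparse_mask`, and the result keeps the CSR layout:

```python
import torch

dense = torch.arange(9, dtype=torch.float64).reshape(3, 3)
csr = torch.eye(3, dtype=torch.float64).to_sparse_csr()

out = torch.mul(dense, csr)  # takes the strided x CSR branch above
assert out.layout == torch.sparse_csr
# Values match the dense elementwise product restricted to the CSR
# pattern (here, the diagonal).
torch.testing.assert_close(out.to_dense(), dense * csr.to_dense())
```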

@nikitaved (Collaborator, Author) commented:

The dispatcher is acting weirdly here. There is a `mul.Scalar` overload for sparse compressed formats, but the dispatcher seems to ignore it for some reason...
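
A sketch of the distinction involved (overload names as in `native_functions.yaml`): a Python number binds to the `mul.Scalar` overload, while a 0-dim tensor is a wrapped scalar that goes through `mul.Tensor`, the path this PR fixes:

```python
import torch

csr = torch.eye(3).to_sparse_csr()

a = csr * 2.0                # Python number -> mul.Scalar overload
b = csr * torch.tensor(2.0)  # 0-dim tensor  -> wrapped scalar via mul.Tensor

torch.testing.assert_close(a.to_dense(), b.to_dense())
```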

@nikitaved requested a review from @pearu, Dec 21, 2022, 12:52
@nikitaved added the `ciflow/trunk` label (Trigger trunk jobs on your pull request), Dec 21, 2022
@pearu (Collaborator) left a comment
LGTM after the lint issues are resolved!

@cpuhrsch (Contributor) left a comment
Accepting to unblock land. Please fix remaining issues. Thanks for sending this!

@nikitaved (Collaborator, Author) commented:
@pearu, could you please unblock by greening your review?

@pearu (Collaborator) left a comment
LGTM! Thanks, @nikitaved!

@nikitaved (Collaborator, Author) commented:

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

ShisuiUzumaki pushed a commit to ShisuiUzumaki/pytorch that referenced this pull request Dec 23, 2022
Fixes pytorch#90819.

The `Scalar` path should have been picked up by the dispatcher, yet the path taken for a 0-dim wrapped scalar was broken.

Pull Request resolved: pytorch#91239
Approved by: https://github.com/pearu, https://github.com/cpuhrsch
Labels: ciflow/trunk, Merged, module: sparse, open source, release notes: sparse
Projects: none
5 participants