
Conversation

@pearu (Collaborator) commented Sep 14, 2022

@pytorch-bot bot commented Sep 14, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85031

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures, 2 Pending

As of commit 926ec25:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pearu pearu self-assigned this Sep 14, 2022
@pearu pearu added the module: sparse label (Related to torch.sparse) Sep 14, 2022
@pearu pearu requested review from cpuhrsch and amjames September 14, 2022 19:22
@pearu pearu added the topic: new features label (topic category) Sep 14, 2022
pearu added a commit that referenced this pull request Sep 14, 2022
@cpuhrsch (Contributor) commented:

Is it really necessary to turn on support for all zero-preserving unary operators to resolve this? I don't think we have good tests for view semantics + autograd for sparse layouts. In particular I'm worried about transpose + .

@pearu pearu changed the title from "Enable unary inplace op for all sparse compressed layouts." to "Enable unary elementwise inplace ops for all sparse compressed layouts." Sep 15, 2022
@pearu (Collaborator, Author) commented Sep 15, 2022

Is it really necessary to turn on support for all zero-preserving unary operators to resolve this? I don't think we have good tests for view semantics + autograd for sparse layouts. In particular I'm worried about transpose + .

@cpuhrsch The question is off-topic because the unary operations affected here are elementwise operations such as abs and sin. These operations are defined by the invariant op(sparse).values() == op(sparse.values()) and are independent of the sparse layout. transpose + operations do not belong to this class of operations.
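
For concreteness, here is a minimal sketch of that invariant in PyTorch, assuming a build where abs is enabled for the CSR layout; the variable names (dense, csr, lhs, rhs) are illustrative only:

```python
import torch

# Zero-preserving elementwise op: applying it to the sparse tensor and then
# reading the values must match applying it directly to the values buffer.
dense = torch.tensor([[0., -1., 0.],
                      [2., 0., -3.]])
csr = dense.to_sparse_csr()

lhs = torch.abs(csr).values()   # op(sparse).values()
rhs = torch.abs(csr.values())   # op(sparse.values())

assert torch.equal(lhs, rhs)    # the layout-independence invariant described above
```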

I have updated the PR title, but I think the confusion stems from the naming of the unary_op templates in SparseCsrTensorMath.cpp. These should really be named something like unary_ufunc_out, unary_ufunc_inplace, etc. Do we want to rename these templates for clarity?

@cpuhrsch (Contributor) commented:

Ok, but the question still stands: this PR doesn't introduce tests for inplace unary ops on sparse compressed layouts. Do we have those somewhere, but they're just not enabled?

@amjames (Collaborator) commented Sep 15, 2022

The only place where the unary_inplace_op template is used is for zero_, correct? We should verify there are tests for that and expand them to cover all layouts.

@pearu (Collaborator, Author) commented Sep 15, 2022

Ok, but the question still stands: this PR doesn't introduce tests for inplace unary ops on sparse compressed layouts. Do we have those somewhere, but they're just not enabled?

The PR does not introduce the tests because they already exist (see test_sparse_csr_unary_inplace in test_sparse_csr.py); however, the tests are run on CSR samples only. Because the unary_op_inplace implementation does not depend on the layout, this is not an issue. Notice that the original assert (initially a check on the dispatch key, later turned into a check on layouts) was there for sanity, and checking for all compressed sparse layouts in unary_op_inplace now restores the original intent of that assert: make sure the template is used on compressed sparse tensors only. In fact, the template is very general; it only requires that its input has a .values() method, so restricting it to specific sparse layouts, as was done before, makes little sense.
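
To illustrate how layout-agnostic such a check is, below is a hypothetical sketch (not the actual test_sparse_csr_unary_inplace) that compares the in-place op against the .values() reference for each compressed layout, assuming a PyTorch version where the to_sparse_csc/bsr/bsc conversions exist and the in-place ops are enabled for these layouts as in this PR; the helper name and blocksize are made up:

```python
import torch

def check_unary_inplace(op, inplace_op, dense):
    # Compare the in-place result on the sparse tensor with the op applied
    # directly to the values buffer, for each compressed layout.
    for to_sparse in (lambda: dense.to_sparse_csr(),
                      lambda: dense.to_sparse_csc(),
                      lambda: dense.to_sparse_bsr((1, 1)),
                      lambda: dense.to_sparse_bsc((1, 1))):
        sparse = to_sparse()
        expected = op(sparse.values())   # reference computed on the values buffer
        inplace_op(sparse)               # in-place op on the compressed sparse tensor
        assert torch.equal(sparse.values(), expected)

dense = torch.tensor([[0., -1.], [2., 0.]])
check_unary_inplace(torch.abs, torch.Tensor.abs_, dense)
```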

The only place where the unary_inplace_op template is used is for zero_, correct?

Incorrect. The template is used for all unary elementwise ops that preserve zeros. In fact, after #85030, zero_ does not use the unary_inplace_op template anymore. Moreover, the OpInfo definition of zero_ does not mark it as a unary operation, and no samples with sparse layouts are generated for zero_ tests. The same problem exists for all other factory functions that have a layout argument, but, while it is an issue, it is off-topic here.
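
As a small, hedged illustration of the kind of zero-preserving in-place ops the template covers (abs and sin are the ones mentioned above), assuming they are enabled for compressed layouts as in this PR: the sparsity pattern stays untouched and only the values buffer is rewritten.

```python
import torch

csr = torch.tensor([[0., -1., 0.],
                    [4., 0., 9.]]).to_sparse_csr()
pattern = csr.col_indices().clone()   # sparsity pattern before the ops

for op_ in (torch.Tensor.abs_, torch.Tensor.sin_):
    op_(csr)                                        # zero-preserving, in-place
    assert torch.equal(csr.col_indices(), pattern)  # indices (pattern) unchanged
```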

@cpuhrsch (Contributor) commented:

@pytorchbot merge this

@pytorch-bot bot commented Sep 15, 2022

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: unrecognized arguments: this

usage: @pytorchbot [-h] {merge,revert,rebase,label} ...

Try @pytorchbot --help for more info.

@cpuhrsch (Contributor) commented:

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.
The merge job was triggered with the land checks (-l) flag. If you did not specify this flag yourself, you are likely enrolled in the land checks rollout. This means that your change will be merged once all checks on your PR and the land checks have passed (ETA 4 Hours). If you need to coordinate lands between different changes and cannot risk a land race, please add the ciflow/trunk label to your PR and wait for signal to complete, and then land your changes in proper order. Having trunk, pull, and Lint pre-run on a PR will bypass land checks and the ETA should be immediate. If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@facebook-github-bot facebook-github-bot deleted the gh/pearu/58/head branch September 19, 2022 14:19
mehtanirav pushed a commit that referenced this pull request Oct 4, 2022