
Sparse CSR: Add torch.sin #68123

Closed
wants to merge 17 commits

Conversation

krshrimali
Contributor

@krshrimali krshrimali commented Nov 10, 2021

This PR attempts to add support for torch.sin for sparse CSR tensors.

This is intended as a revised implementation (in some form) of #68083; it follows the approach used in SparseTensorMath.cpp.

The tests and empty_like support for sparse CSR tensors (with a minor correction) are borrowed from #68083 temporarily to assist CI with testing this PR. :)

cc @nikitaved @pearu @cpuhrsch @IvanYashchuk @krshrimali
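
For reference, a minimal sketch of the behavior this PR enables (values are illustrative; to_sparse_csr() and torch.sin are the public APIs involved):

import torch

# Build a small 2D strided tensor and convert it to the sparse CSR layout.
dense = torch.tensor([[0.0, 1.5], [0.5, 0.0]])
csr = dense.to_sparse_csr()

# With this PR, torch.sin accepts the CSR tensor directly. Since sin(0) == 0,
# the sparsity pattern is preserved and only the stored values change.
result = torch.sin(csr)
assert result.layout == torch.sparse_csr
assert torch.allclose(result.to_dense(), torch.sin(dense))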

@pytorch-probot

pytorch-probot bot commented Nov 10, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/krshrimali/pytorch/blob/97de32358d5be5e7ecfdefc3b83e61b16f87a6b1/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflow | Labels | Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos 🚫 skipped
macos-10-15-py3-x86-64 ciflow/all, ciflow/macos 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Nov 10, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 97de323 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@krshrimali krshrimali removed the request for review from ezyang November 10, 2021 18:05
@krshrimali krshrimali marked this pull request as draft November 10, 2021 18:05
@krshrimali krshrimali added the module: sparse label Nov 11, 2021
torch/testing/_internal/common_methods_invocations.py (outdated, resolved)
Comment on lines 960 to 975
@ops(_sparse_csr_ops)
def test_sparse_csr_no_error(self, device, dtype, op):
    samples = op.sample_inputs(device, dtype)

    if len(samples) == 0:
        self.skipTest("Skipped! No sample inputs!")

    for sample in samples:
        assert isinstance(sample.input, torch.Tensor)
        if sample.input.ndim != 2:
            continue

        expected = op(sample.input)
        assert torch.is_tensor(expected)
        output = op(sample.input.to_sparse_csr())
        assert torch.is_tensor(output)
Contributor Author

The plan is to look at it again and have a specialized test for sparse CSR tensors (if everyone agrees).

I plan to take a look at it in a separate PR next week.

@krshrimali krshrimali marked this pull request as ready for review November 12, 2021 11:24
@saketh-are saketh-are added the triaged label Nov 12, 2021
Collaborator

@IvanYashchuk IvanYashchuk left a comment

Having an OpInfo-based test that checks that the function doesn't error out on sparse input is good, but let's also add a correctness check.
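
A rough sketch of what such a correctness check could look like, mirroring the existing no-error test above (the test name and structure here are illustrative, not the final version merged in this PR):

@ops(_sparse_csr_ops)
def test_sparse_csr_unary_out_matches_dense(self, device, dtype, op):
    samples = op.sample_inputs(device, dtype)
    for sample in samples:
        if sample.input.ndim != 2:
            continue
        expected = op(sample.input)
        actual = op(sample.input.to_sparse_csr())
        # Compare values against the dense result instead of only
        # checking that no error is raised.
        self.assertEqual(actual.to_dense(), expected)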

aten/src/ATen/native/TensorFactories.cpp (outdated, resolved)
aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
test/test_sparse_csr.py (outdated, resolved)
test/test_sparse_csr.py (resolved)
test/test_sparse_csr.py (outdated, resolved)
Comment on lines +316 to +318
" for self, and ",
src.layout(),
" for src");
Contributor Author

Earlier, the message ended with something like:

but got self, src: kStridedkStrided

i.e., the two layout names ran together with no separator between them.

Comment on lines +68 to +70
if (result.numel() == 0) {
  at::native::resize_as_sparse_csr_(result, self);
}
Contributor Author

at::native::resize_output(result.col_indices(), self.col_indices().sizes());
// OR
result.col_indices().resize_(self.col_indices().sizes());

Neither of these resizes the result.col_indices() tensor. Is this a bug, or is it expected? cc: @IvanYashchuk @cpuhrsch

Collaborator

If we write

auto col_indices = result.col_indices();
col_indices.resize_(self.col_indices().sizes());

We would see that col_indices is resized, but result.col_indices() would still report the original size. To replace result.col_indices() we would need to use the set_member_tensors method (result.unsafeGetTensorImpl()->set_member_tensors(...)).
It's expected; this issue is related to #63549.

Collaborator

@IvanYashchuk IvanYashchuk left a comment

Thank you @krshrimali! I've got only two minor comments. In a follow-up PR, we should expand the tests to cover the out= and in-place variants.

aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp (outdated, resolved)
aten/src/ATen/native/TensorFactories.cpp (outdated, resolved)
@facebook-github-bot
Contributor

@cpuhrsch has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

raise ValueError(f"Expected 2D tensor but got tensor with dimension: {sample.input.ndim}.")

expected = op(sample.input)
assert torch.is_tensor(expected)
Contributor

nit: it might be better to use self.assertTrue to get a better error message (we can do that in a follow-up PR or together with other changes for more unary ops)
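
Concretely, the nit suggests something like this hypothetical rewrite of the assert above:

self.assertTrue(torch.is_tensor(expected),
                msg=f"expected op to return a tensor, got {type(expected)}")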

@facebook-github-bot
Contributor

@cpuhrsch merged this pull request in 833dcaf.

Labels
cla signed · Merged · module: sparse · open source · triaged

7 participants