Conversation

[ghstack-poisoned]
@amjames amjames requested a review from bdhirsh as a code owner July 1, 2022 19:23
amjames added a commit that referenced this pull request Jul 1, 2022
ghstack-source-id: 9a9e68b
Pull Request resolved: #80781
@facebook-github-bot
Contributor

facebook-github-bot commented Jul 1, 2022

✅ No Failures (3 Pending)

As of commit 925ca12 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@amjames amjames marked this pull request as draft July 1, 2022 19:24
@amjames amjames added the module: sparse Related to torch.sparse label Jul 1, 2022
@amjames
Collaborator Author

amjames commented Jul 1, 2022

An issue that will probably need some discussion: it appears that SciPy does not implement the BSC format, so the existing testing framework is not going to work out of the box. We could do some combination of checking indices against a compatible CSC tensor and manually checking block-by-block value equality (a rough sketch below), but I'm all for a better solution if there is one on offer.

cc @cpuhrsch @pearu
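
A hedged sketch of that workaround in ATen terms. All of this is illustrative, not the existing test-framework API; it assumes the dense-to-BSC conversion stores exactly those blocks that contain at least one nonzero, and that to_sparse_csc is available for building the index reference.

#include <ATen/ATen.h>

// Validate a BSC tensor against its dense source: compare indices with a
// compatible CSC tensor built over the block-nonzero pattern, then check the
// stored values block by block.
void check_bsc_against_dense(const at::Tensor& dense, const at::Tensor& bsc,
                             int64_t br, int64_t bc) {
  const auto nbr = dense.size(0) / br;
  const auto nbc = dense.size(1) / bc;
  // (1) Index check: a CSC tensor whose (i, j) entry is nonzero iff block
  // (i, j) of the dense input is nonzero should share the BSC tensor's indices.
  auto block_pattern = dense.reshape({nbr, br, nbc, bc})
                           .permute({0, 2, 1, 3})
                           .reshape({nbr, nbc, br * bc})
                           .abs()
                           .sum(-1);
  auto csc = block_pattern.to_sparse_csc();
  TORCH_CHECK(at::equal(bsc.ccol_indices(), csc.ccol_indices()));
  TORCH_CHECK(at::equal(bsc.row_indices(), csc.row_indices()));
  // (2) Value check: each stored (br, bc) block must match the corresponding
  // slice of the dense input.
  auto ccol = bsc.ccol_indices();
  auto rows = bsc.row_indices();
  auto vals = bsc.values();
  for (int64_t j = 0; j < nbc; ++j) {
    for (int64_t k = ccol[j].item<int64_t>();
         k < ccol[j + 1].item<int64_t>(); ++k) {
      const auto i = rows[k].item<int64_t>();
      auto block = dense.slice(0, i * br, (i + 1) * br)
                       .slice(1, j * bc, (j + 1) * bc);
      TORCH_CHECK(at::equal(vals[k], block));
    }
  }
}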

@cpuhrsch
Contributor

cpuhrsch commented Jul 1, 2022

@amjames - I'm wondering whether support for transpose(BSR) -> BSC and transpose(BSC) -> BSR could help the tests.
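
A minimal sketch of how that could look, purely speculative since it assumes both the transpose support proposed here (transposing the two sparse dimensions of a BSR tensor yielding an equivalent BSC tensor) and a to_sparse_bsc conversion:

#include <ATen/ATen.h>

// SciPy does implement BSR, so a BSC result could be validated by comparing
// it against the transpose of a (SciPy-checkable) BSR conversion of the
// transposed input.
void check_bsc_via_transpose(const at::Tensor& dense) {
  auto bsc = dense.to_sparse_bsc({2, 2});  // conversion under test
  auto bsr = dense.transpose(0, 1).contiguous().to_sparse_bsr({2, 2});
  auto bsc_ref = bsr.transpose(0, 1);      // proposed BSR -> BSC transpose
  TORCH_CHECK(bsc_ref.layout() == at::kSparseBsc);
  TORCH_CHECK(at::equal(bsc.ccol_indices(), bsc_ref.ccol_indices()));
  TORCH_CHECK(at::equal(bsc.row_indices(), bsc_ref.row_indices()));
  TORCH_CHECK(at::equal(bsc.values(), bsc_ref.values()));
}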

amjames added a commit that referenced this pull request Jul 25, 2022
ghstack-source-id: 7be0686
Pull Request resolved: #80781
amjames added a commit that referenced this pull request Jul 26, 2022
ghstack-source-id: 7f91f4e
Pull Request resolved: #80781
Comment on lines 95 to 96
auto batched_out =
at::zeros({n_batch, n_compressed_per_batch + 1}, compressed_in.options());
Collaborator

This needs to change to compressed_in.options().dtype(out_type); then we can remove the to conversion right before the return from the function.
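
That is, something like the following (out_type assumed to be the target index dtype already in scope):

auto batched_out = at::zeros(
    {n_batch, n_compressed_per_batch + 1},
    compressed_in.options().dtype(out_type));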

Comment on lines 104 to 105
auto offsets = compressed_in.index({Slice(0, -1, n_compressed_per_batch)})
.reshape({n_batch, -1});
Collaborator

Same here: we can use slice and drop the dependency on at::indexing.

auto offsets = compressed_in.slice(0, 0, -1, n_compressed_per_batch)
                   .reshape({n_batch, -1});

batched_out.narrow(-1, 1, n_compressed_per_batch)
    .copy_(trailing_slice - offsets);
Collaborator Author

@nikitaved I sort of minced your suggestions together with what was already here.

Yes, it could go in a single statement, but it is very difficult to parse and the two intermediate variables trailing_slice and offsets make it much clearer.

Collaborator

@nikitaved nikitaved Aug 26, 2022

And you decided against in-placeness, right? I like these variables. Maybe we could also sprinkle some comments here and there...

Collaborator Author

@amjames amjames Aug 26, 2022

Sorry, yes. The copy_ is fine; however, we can't do the subtraction in place, as both operands are views on the same storage, and that sub does in fact broadcast: (n_batch x n_compressed_per_batch) - (n_batch x 1)
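
A minimal standalone illustration of the aliasing problem (the shapes here are hypothetical stand-ins for the real batch and compressed dimensions):

#include <ATen/ATen.h>

void aliasing_demo() {
  auto compressed = at::arange(12);
  auto trailing = compressed.view({3, 4});  // (n_batch, n_compressed_per_batch)
  auto offsets =
      compressed.slice(0, 0, -1, 4).unsqueeze(1);  // (n_batch, 1), same storage
  auto safe = trailing - offsets;  // out-of-place: correct
  // trailing.sub_(offsets) would broadcast (3, 4) - (3, 1) while writing into
  // the very storage offsets reads from: trailing[i][0] aliases offsets[i][0],
  // so later columns of each row would subtract an already-modified offset
  // (if the memory overlap is not flagged outright).
}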

Collaborator

Yeah, missed that. Good catch!

Collaborator

@nikitaved nikitaved Aug 26, 2022

Alternatively, we could probably copy first, then do an in-place diff?
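
Continuing the toy shapes from above, the copy-first variant would indeed be safe, since batched_out does not alias the compressed indices:

#include <ATen/ATen.h>

void copy_then_diff_demo() {
  auto compressed = at::arange(12);
  auto trailing = compressed.view({3, 4});
  auto offsets = compressed.slice(0, 0, -1, 4).unsqueeze(1);
  auto batched_out = at::zeros({3, 5}, compressed.options());
  batched_out.narrow(-1, 1, 4).copy_(trailing);  // copy first...
  batched_out.narrow(-1, 1, 4).sub_(offsets);    // ...then diff in place: no aliasing
}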

Collaborator

@nikitaved nikitaved left a comment

Thank you, @amjames, this looks great!

@amjames
Collaborator Author

amjames commented Sep 1, 2022

@pytorchbot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the land checks (-l) flag. If you did not specify this flag yourself, you are likely enrolled in the land checks rollout. This means that your change will be merged once all checks on your PR have passed since you have added the ciflow/trunk label to your PR (ETA 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions
Contributor

github-actions bot commented Sep 1, 2022

Hey @amjames.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@amjames amjames added release notes: sparse release notes category topic: new features topic category labels Sep 1, 2022
facebook-github-bot pushed a commit that referenced this pull request Sep 2, 2022
Summary:
Pull Request resolved: #80781
Approved by: https://github.com/bhosmer, https://github.com/nikitaved

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/8778f337442ab7ad512d20c3a9028df59380c6f0

Reviewed By: mehtanirav

Differential Revision: D39213811

fbshipit-source-id: 90a50de4bf00fc7294e3490c34d75264a036a364
@facebook-github-bot facebook-github-bot deleted the gh/amjames/7/head branch September 5, 2022 14:19