
Conversation

@narang99 (Contributor)

Issue #61122

- Only ported copy_ for sparse tensors to the dispatcher; everything else is the same
- Duplicated code for named tensor handling in sparse tensor copy
	- Might change it later to handle named tensors using the dispatcher
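The idea behind "porting copy for sparse tensors to the dispatcher" can be pictured with a toy dispatch table. This is a hypothetical sketch with invented names (`DispatchKey`, `copy_table`, the string payload), not ATen's actual dispatcher:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch, NOT PyTorch's real dispatcher: instead of one
// monolithic copy_impl that branches on "is this tensor sparse?", the sparse
// copy becomes a kernel registered under its own dispatch key, and copy_
// simply routes through the table.
enum class DispatchKey { CPU, SparseCPU };

struct Tensor {
  DispatchKey key;
  std::string payload;  // stand-in for real storage
};

using CopyKernel = std::function<void(Tensor&, const Tensor&)>;

std::map<DispatchKey, CopyKernel>& copy_table() {
  static std::map<DispatchKey, CopyKernel> table{
      // Both kernels are trivial stand-ins here; in the real code they
      // would contain the dense and sparse copy logic respectively.
      {DispatchKey::CPU,
       [](Tensor& self, const Tensor& src) { self.payload = src.payload; }},
      {DispatchKey::SparseCPU,
       [](Tensor& self, const Tensor& src) { self.payload = src.payload; }},
  };
  return table;
}

// Dispatcher entry point: pick the kernel from the destination's key.
void copy_(Tensor& self, const Tensor& src) {
  copy_table().at(self.key)(self, src);
}
```

With this shape, adding a new backend means registering one more table entry rather than growing an if/else chain inside a single `copy_impl`.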
@facebook-github-bot (Contributor)

facebook-github-bot commented Sep 18, 2021

💊 CI failures summary and remediations

As of commit fc729c6 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@codecov

codecov bot commented Sep 19, 2021

Codecov Report

Merging #65304 (fc729c6) into master (7f8d622) will decrease coverage by 0.00%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master   #65304      +/-   ##
==========================================
- Coverage   66.45%   66.45%   -0.01%     
==========================================
  Files         735      735              
  Lines       93906    93906              
==========================================
- Hits        62402    62401       -1     
- Misses      31504    31505       +1     

bool non_blocking) {
// TODO: Once copy_ is fully migrated to use dispatcher, handle named
// inference using dispatcher instead of doing it everywhere
auto maybe_outnames = namedinference::compute_broadcast_outnames(self, src);
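The `compute_broadcast_outnames` call above is what carries named-tensor inference through `copy_`; conceptually it unifies the two tensors' dimension names from the right. A toy sketch of that unification rule, assuming names are strings and `"*"` stands for an unnamed dimension (this is not ATen's actual implementation):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Toy sketch of broadcast out-name computation: align the two name lists
// from the right and unify element-wise. A wildcard "*" yields to a concrete
// name; mismatched concrete names are an error. Not ATen code.
std::vector<std::string> unify_from_right(const std::vector<std::string>& a,
                                          const std::vector<std::string>& b) {
  const auto& longer = a.size() >= b.size() ? a : b;
  const auto& shorter = a.size() >= b.size() ? b : a;
  std::vector<std::string> out = longer;  // unmatched leading dims pass through
  const size_t offset = longer.size() - shorter.size();
  for (size_t i = 0; i < shorter.size(); ++i) {
    const std::string& x = longer[offset + i];
    const std::string& y = shorter[i];
    if (x == "*") out[offset + i] = y;
    else if (y == "*" || x == y) out[offset + i] = x;
    else throw std::runtime_error("names do not match: " + x + " vs " + y);
  }
  return out;
}
```

For example, unifying `{"N", "C"}` with `{"C"}` yields `{"N", "C"}`, while two different concrete names in the same position raise an error.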
Contributor

Is this making copy sparse more correct than it was before? Because in the old code it seems like names were just not propagated for sparse at all. I think I'd probably prefer that, mostly because it's not worth gooping up the code here.

Contributor

And then maybe you can use copy_sparse_to_sparse_; so instead of double dispatch (go to the wrapper, then go to copy_sparse_to_sparse), just inline the dispatch table of copy_sparse_to_sparse into copy.
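The suggestion is about saving a second trip through the dispatch table: rather than `copy_` dispatching to a sparse wrapper that then dispatches again to `copy_sparse_to_sparse_`, the sparse kernel would sit directly in `copy_`'s table. A hypothetical sketch of the difference, counting table hops (all names invented, not ATen code):

```cpp
// Hypothetical sketch: count how many dispatches ("hops") each call path
// performs. Not ATen code; function names are invented for illustration.
static int hops = 0;

// The actual sparse copy kernel; reaching it costs one dispatch.
void copy_sparse_to_sparse_kernel() { ++hops; }

// Double dispatch: copy_ dispatches to this wrapper (hop 1), which then
// dispatches again to the sparse kernel (hop 2).
void sparse_copy_wrapper() {
  ++hops;
  copy_sparse_to_sparse_kernel();
}

int hops_via_wrapper() { hops = 0; sparse_copy_wrapper(); return hops; }

// Inlined: the sparse kernel is registered directly in copy_'s table,
// so the call is a single hop.
int hops_inlined() { hops = 0; copy_sparse_to_sparse_kernel(); return hops; }
```

The inlined path does one lookup where the wrapper path does two, which is the overhead the comment is proposing to eliminate.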

Contributor Author

Uh... I thought the sparse flow was going through name propagation. It went through copy_, which did name prop and then called copy_impl, which contained all the implementations.
Please correct me if I'm wrong

Contributor

Blast, you're right

@ezyang ezyang changed the title [WIP] Added sparse-tensor copy logic to dispatcher Added sparse-tensor copy logic to dispatcher Sep 24, 2021
@facebook-github-bot (Contributor)

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
