
[PyTorch] validate that SparseTensorImpl::dim needn't be overridden #49767

Closed
swolchok wants to merge 5 commits

Conversation

swolchok (Contributor) commented on Dec 22, 2020

Stack from ghstack:

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.

Differential Revision: D25686830

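For context, the shape of the validation is: keep the existing SparseTensorImpl::dim() override, but assert (in debug builds only) that the base TensorImpl::dim() returns the same value, so the override can be deleted in a follow-up once nothing trips the assertion. A minimal sketch, assuming the override's body is the usual sparse_dim_ + dense_dim_ sum; not the verbatim source:

// SparseTensorImpl.cpp (sketch)
int64_t SparseTensorImpl::dim() const {
  // The base implementation derives dim() from the sizes vector; check that it
  // agrees with the sparse-specific bookkeeping before dropping this override.
  TORCH_INTERNAL_ASSERT_DEBUG_ONLY(sparse_dim_ + dense_dim_ == TensorImpl::dim());
  return sparse_dim_ + dense_dim_;
}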
facebook-github-bot commented on Dec 22, 2020

💊 CI failures summary and remediations

As of commit 2f2c5cc (more details on the Dr. CI page):


  • 2/3 failures possibly* introduced in this PR
    • 2/2 non-CircleCI failure(s)
  • 1/3 broken upstream at merge base c0551c7 on Dec 22 from 7:58am to 5:30pm

1 job timed out:

  • pytorch_linux_bionic_py3_8_gcc9_coverage_test1

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 1 failed



swolchok added a commit that referenced this pull request Dec 23, 2020
Pull Request resolved: #49767

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.
ghstack-source-id: 119068225

Differential Revision: [D25686830](https://our.internmc.facebook.com/intern/diff/D25686830/)
swolchok requested a review from ngimel on January 5, 2021 19:09
…verridden"

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.

Differential Revision: [D25686830](https://our.internmc.facebook.com/intern/diff/D25686830/)

[ghstack-poisoned]
…verridden"

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.

Differential Revision: [D25686830](https://our.internmc.facebook.com/intern/diff/D25686830/)

[ghstack-poisoned]
ezyang self-requested a review on January 6, 2021 18:52
@@ -70,6 +70,7 @@ void SparseTensorImpl::set_storage_offset(int64_t storage_offset) {
 }
 
 int64_t SparseTensorImpl::dim() const {
+  TORCH_INTERNAL_ASSERT_DEBUG_ONLY(sparse_dim_ + dense_dim_ == TensorImpl::dim());
Contributor: why debug only? Do we have full coverage for our debug builds?

Author (swolchok): beats me, but slowing down dim() in prod seems like a bad choice.

Contributor: right, but you don't have to land it. This would just ensure our OSS CI at least runs over it.

Author (swolchok): done #50172
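The distinction the thread turns on: TORCH_INTERNAL_ASSERT_DEBUG_ONLY only evaluates its condition in debug builds (when NDEBUG is not defined), so release-only CI jobs never exercise the check, while TORCH_INTERNAL_ASSERT always evaluates it. A rough sketch of the throwaway always-on variant the reviewer is asking for (presumably the shape of #50172; not meant to land), under the same assumptions as the sketch above:

int64_t SparseTensorImpl::dim() const {
  // Always-on form of the same check: it costs a comparison on every dim()
  // call, which is why the landed version uses the DEBUG_ONLY macro instead.
  TORCH_INTERNAL_ASSERT(sparse_dim_ + dense_dim_ == TensorImpl::dim());
  return sparse_dim_ + dense_dim_;
}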

swolchok (Author) commented on Jan 7, 2021

#50172 looks fine. A single test failed due to taking too long.

…verridden"

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.

Differential Revision: [D25686830](https://our.internmc.facebook.com/intern/diff/D25686830/)

[ghstack-poisoned]
facebook-github-bot: This pull request has been merged in 1a1b665.

facebook-github-bot deleted the gh/swolchok/57/head branch on January 11, 2021 15:17
hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 14, 2021
[PyTorch] validate that SparseTensorImpl::dim needn't be overridden (pytorch#49767)

Summary:
Pull Request resolved: pytorch#49767

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.
ghstack-source-id: 119528066

Test Plan: CI

Reviewed By: ezyang, bhosmer

Differential Revision: D25686830

fbshipit-source-id: f931394d3de6df7f6c5c68fe8ab711d90d3b12fd