[PyTorch] validate that SparseTensorImpl::dim needn't be overridden #49767
Conversation
I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it. Differential Revision: [D25686830](https://our.internmc.facebook.com/intern/diff/D25686830/) [ghstack-poisoned]
💊 CI failures summary and remediations

As of commit 2f2c5cc (more details on the Dr. CI page):

- 1 job timed out.
- 🚧 1 fixed upstream failure: this was probably caused by an upstream breakage that has already been fixed. Please rebase on the …
Pull Request resolved: #49767 I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it. ghstack-source-id: 119068225 Differential Revision: [D25686830](https://our.internmc.facebook.com/intern/diff/D25686830/)
@@ -70,6 +70,7 @@ void SparseTensorImpl::set_storage_offset(int64_t storage_offset) {
}

int64_t SparseTensorImpl::dim() const {
  TORCH_INTERNAL_ASSERT_DEBUG_ONLY(sparse_dim_ + dense_dim_ == TensorImpl::dim());
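For context, a minimal sketch (not the actual PyTorch source; the class and field names here mirror the C++ ones but are simplified) of the invariant this assertion validates: a sparse COO tensor's total dimensionality is the sum of its sparse and dense dimensions, so the base-class `dim()`, which just counts the entries of `sizes`, should already return the right answer.

```python
class TensorImplSketch:
    """Simplified stand-in for TensorImpl."""

    def __init__(self, sizes):
        self.sizes = list(sizes)

    def dim(self):
        # Base implementation: the number of entries in sizes.
        return len(self.sizes)


class SparseTensorImplSketch(TensorImplSketch):
    """Simplified stand-in for SparseTensorImpl."""

    def __init__(self, sparse_dim, dense_dim, sizes):
        super().__init__(sizes)
        self.sparse_dim_ = sparse_dim
        self.dense_dim_ = dense_dim

    def dim(self):
        # The override under validation: it must agree with the base
        # result, which is exactly what the debug assert checks.
        assert self.sparse_dim_ + self.dense_dim_ == super().dim()
        return self.sparse_dim_ + self.dense_dim_


t = SparseTensorImplSketch(sparse_dim=2, dense_dim=1, sizes=[4, 5, 3])
print(t.dim())  # 3 -- matches len(sizes)
```

If the assertion never fires in CI, the override is redundant and can be deleted in a follow-up, leaving only the inherited implementation.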
why debug only? Do we have full coverage for our debug builds?
Beats me, but slowing down dim() in prod seems like a bad choice.
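The trade-off being discussed has a direct Python analogue, sketched below: plain `assert` statements are stripped when the interpreter runs with `-O`, much as `TORCH_INTERNAL_ASSERT_DEBUG_ONLY` compiles away in release builds, so a hot accessor pays for the check only in debug runs. (`dim_checked` is a hypothetical illustration, not PyTorch code.)

```python
def dim_checked(sparse_dim, dense_dim, sizes):
    # Debug-only invariant check: this assert is removed entirely when
    # Python runs with -O, analogous to a release (NDEBUG) C++ build.
    assert sparse_dim + dense_dim == len(sizes), "dim invariant violated"
    return len(sizes)


print(dim_checked(2, 1, [4, 5, 3]))  # 3
```

The cost of this design is the one raised in the review: the invariant is only exercised on builds (or CI jobs) that actually run in debug mode.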
right, but you don't have to land it. This would just ensure our OSS CI at least runs over it.
Done: #50172
#50172 looks fine. A single test failed due to taking too long.
This pull request has been merged in 1a1b665.
Summary: Pull Request resolved: pytorch#49767

I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.

ghstack-source-id: 119528066
Test Plan: CI
Reviewed By: ezyang, bhosmer
Differential Revision: D25686830
fbshipit-source-id: f931394d3de6df7f6c5c68fe8ab711d90d3b12fd
Stack from ghstack:
I'm told that the base implementation should work fine. Let's validate that in an intermediate diff before removing it.
Differential Revision: D25686830