sparse compressed tensor validation without syncs for low-(batch)dim tensors. #94048
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/94048
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit 15c0078. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed 232de1f to ebdac2d
Force-pushed ebdac2d to e70e57e
Force-pushed e70e57e to 828ab25
LGTM.
Neat!
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Successfully rebased.
Force-pushed ab7363f to bd78cc2
This PR improves COO intersection primitives by:
* making them sync-less (for dims <= 8; the bound can be raised to any value that fits on the stack).
* improving performance via far fewer kernel calls.
Pull Request resolved: #92976
Approved by: https://github.com/cpuhrsch, https://github.com/pearu
This reverts commit b033594. Reverted #92976 on behalf of https://github.com/seemethere due to Need to revert this so I can revert #94048 cleanly
@pytorchbot revert -c ghfirst -m "Sign compare between size_t and int64_t is not allowed"
@pytorchbot successfully started a revert job. Check the current status here.
@nikitaved your PR has been successfully reverted.
…tch)dim tensors. (#94048)" This reverts commit 7901f2d. Reverted #94048 on behalf of https://github.com/seemethere due to Sign compare between size_t and int64_t is not allowed
…tensors. (#94048) As per title. Sync is still unavoidable for super high-dim tensors. Pull Request resolved: pytorch/pytorch#94048 Approved by: https://github.com/alexsamardzic, https://github.com/cpuhrsch
@pytorchbot merge -g
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
As per title. Sync is still unavoidable for super high-dim tensors.
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer