enable cat for cuda bits types #115044
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/115044
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit 157c7a5 with merge base 4cfe997: FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -153,7 +153,14 @@
     logging.warning(e)

 # Experimental functionality
-from quantization.core.experimental.test_bits import TestBits  # noqa: F401
+try:
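The rest of the hunk is truncated here, but it appears to wrap the experimental import in a try/except so a failed import only logs a warning instead of breaking test collection. A minimal sketch of that pattern, assuming the handler mirrors the logging.warning(e) context line above (the exact except clause is an assumption):

```python
import logging

# Guarded import of the experimental bits test cases: if the module is
# missing or fails to import, log the error instead of failing test
# discovery for the whole suite.
try:
    from quantization.core.experimental.test_bits import TestBits  # noqa: F401
except ImportError as e:
    logging.warning(e)
```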
This makes Steve sad...
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Merge failed. Reason: New commits were pushed while merging. Please rerun the merge command. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -m "This breaks ROCM" -c nosignal
@pytorchbot successfully started a revert job. Check the current status here.
@ngimel your PR has been successfully reverted.
This reverts commit 4cf97c4. Reverted #115044 on behalf of https://github.com/malfet due to "This breaks ROCM" (see the comment on #115044).
@pytorchbot merge -f "flaky test"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
It was already working for CPU, so bring parity. Also, slightly reduce the number of compiled kernels by using OpaqueType.
Pull Request resolved: pytorch#115044
Approved by: https://github.com/malfet
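For context, a rough sketch of what this enables on the CUDA side. This is a hedged illustration, not code from the PR: the specific bits dtype (torch.bits8) and the trick of creating it by viewing a same-width uint8 tensor are assumptions based on how the experimental bits types are typically exercised, and it requires a CUDA build to run:

```python
import torch

# bits tensors can't be filled directly; reinterpret same-width uint8 storage.
a = torch.arange(4, dtype=torch.uint8, device="cuda").view(torch.bits8)
b = torch.arange(4, dtype=torch.uint8, device="cuda").view(torch.bits8)

# cat on bits dtypes already worked on CPU; this PR brings CUDA to parity.
out = torch.cat([a, b])
print(out.shape, out.dtype)  # expected: torch.Size([8]) torch.bits8
```

The OpaqueType mention refers to dispatching the kernel on an opaque element of the right byte width rather than on each bits dtype individually, so dtypes that share an element size can reuse the same compiled kernel.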