Enable test_triton_fx_graph_with_et_xpu to run with XPU #169181
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/169181
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit e873c34 with merge base 8ca51be: FLAKY - the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed 3782ead to b0d83d5 (compare)
@EikanWang @guangyey @chuanqi129 Please help with the review.
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased b0d83d5 to 10a4618 (compare)
Force-pushed 10a4618 to 63b45e0 (compare)
CI failed with unrelated, flaky tests; rebased.
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased 63b45e0 to e873c34 (compare)
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Change the hardcoded `"cuda:0"` to a `device` parameter to allow running `test_triton_fx_graph_with_et` on different devices; notably, the test now passes on XPU. Simplify skip conditions and make a minor refactor. Fixes intel/torch-xpu-ops#2040. Pull Request resolved: pytorch#169181. Approved by: https://github.com/guangyey, https://github.com/jansel, https://github.com/EikanWang
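The pattern behind this change can be sketched as follows. This is a hypothetical illustration, not the actual PyTorch test code: the helper names `pick_device` and `run_graph_on`, and the device-fallback order, are assumptions made for the example; the real PR threads a `device` parameter through the existing test harness.

```python
# Hypothetical sketch: instead of hardcoding "cuda:0", the test body
# receives a device string, so the same check runs on CUDA, XPU, or CPU.
# Helper names and fallback order are illustrative assumptions.


def pick_device() -> str:
    """Return an available accelerator device string, falling back to CPU."""
    try:
        import torch

        if torch.cuda.is_available():
            return "cuda:0"
        # XPU is Intel's GPU backend; guard the attribute for older builds.
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return "xpu:0"
    except ImportError:
        pass
    return "cpu"


def run_graph_on(device: str) -> float:
    """Illustrative device-agnostic test body: tensors are created on the
    given device rather than on a hardcoded "cuda:0"."""
    import torch

    x = torch.ones(4, device=device)
    return (x * 2).sum().item()
```

With this shape, enabling a new backend such as XPU is just a matter of passing a different device string, rather than editing the test body.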