[ROCm][CI] Create periodic-rocm-mi200.yml #166544
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166544
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV
There is 1 currently active SEV. If your PR is affected, please view it below.
⏳ No Failures, 13 Pending
As of commit 98717aa with merge base deb7763.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Warning: Unknown label
Please add the new label to .github/pytorch-probot.yml
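This warning means the new ciflow tag is not yet known to the probot config. A minimal sketch of the fix, assuming `.github/pytorch-probot.yml` lists its ciflow tags under a `ciflow_push_tags` key (the key name and the neighboring entry are assumptions here, not taken from the PR):

```yaml
# .github/pytorch-probot.yml (hypothetical excerpt)
ciflow_push_tags:
  - ciflow/periodic                # existing tag, shown for context
  - ciflow/periodic-rocm-mi200     # register the new label so the bot recognizes it
```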
Merge failed. Reason: New commits were pushed while merging. Please rerun the merge command. Details for Dev Infra team: Raised by workflow job.
@pytorchbot merge -f "Force merging to relieve MI2xx queueing and provide separate workflow to target ROCm MI2xx distributed jobs"

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
- Adds a new `ciflow/periodic-rocm-mi200` label to allow us to run distributed tests only on ROCm runners, without triggering many other jobs on the `periodic.yml` workflow (via `ciflow/periodic`); see the workflow sketch after this list.
- The new workflow is still triggered via `ciflow/periodic` as well, thus maintaining the old status quo.
- Uses the `linux.rocm.gpu.4` label since it targets a lot more CI nodes at this point than the K8s/ARC-based `linux.rocm.gpu.mi250.4` label, as that is still having some network/scaling issues.

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
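For context, here is a minimal sketch of how the new workflow's triggers and runner targeting could be wired up, assuming PyTorch's usual reusable-workflow pattern; the reusable workflow paths, build environment, cron cadence, and shard counts below are illustrative assumptions, not copied from the PR:

```yaml
# Hypothetical sketch of .github/workflows/periodic-rocm-mi200.yml
name: periodic-rocm-mi200

on:
  push:
    tags:
      - ciflow/periodic-rocm-mi200/*   # new dedicated tag for ROCm MI2xx distributed jobs
      - ciflow/periodic/*              # keep the old trigger, preserving the status quo
  schedule:
    - cron: "45 0,8,16 * * *"          # placeholder cadence
  workflow_dispatch:

jobs:
  linux-rocm-distributed-build:
    # assumed reusable build workflow, following the repo's _linux-build.yml pattern
    uses: ./.github/workflows/_linux-build.yml
    with:
      build-environment: linux-jammy-rocm-py3.10   # illustrative build environment
      test-matrix: |
        { include: [
          { config: "distributed", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.4" },
          { config: "distributed", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.4" },
        ]}
    secrets: inherit

  linux-rocm-distributed-test:
    # assumed reusable test workflow consuming the build's docker image and test matrix
    uses: ./.github/workflows/_linux-test.yml
    needs: linux-rocm-distributed-build
    with:
      build-environment: linux-jammy-rocm-py3.10
      docker-image: ${{ needs.linux-rocm-distributed-build.outputs.docker-image }}
      test-matrix: ${{ needs.linux-rocm-distributed-build.outputs.test-matrix }}
    secrets: inherit
```

The `runner: "linux.rocm.gpu.4"` entries reflect the third bullet above, keeping the distributed jobs on the label that currently has more CI capacity than the K8s/ARC-based `linux.rocm.gpu.mi250.4` pool.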