DISABLED test_correctness_ASGD_use_closure_False_cuda_float32 (__main__.CompiledOptimizerParityTestsCUDA) #125924
Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:
Within ~15 minutes, the test will be disabled in PyTorch CI for the listed platforms. To modify the platforms list, please include a line like the one below in the issue body; if no platforms list is specified, the default action disables the test on all platforms.
We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.
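For example, to keep a test disabled only on Linux and ROCm runners, the issue body would contain a line like the following (an illustrative platforms list, not the one used in this issue):

```
Platforms: linux, rocm
```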
Another case of trunk flakiness has been found here. The list of platforms [linux, slow] appears to contain all the recently affected platforms [linux, slow]. Either the change didn't propagate fast enough or the disable bot might be broken.
It looks like there is a memory leak in eager mode in the SequentialLR scheduler after I added the tensor LR; let me see what's going on here.
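For context, here is a minimal sketch of the configuration being described: an optimizer constructed with a Tensor learning rate, wrapped in a SequentialLR scheduler. The model, schedulers, and hyperparameters are hypothetical illustrations (assuming a recent PyTorch where optimizers accept a Tensor lr), not the code from the failing test:

```python
import torch

# Hypothetical setup: an optimizer built with a Tensor lr instead of a Python
# float, driven by a SequentialLR that chains two sub-schedulers.
model = torch.nn.Linear(4, 4)
opt = torch.optim.ASGD(model.parameters(), lr=torch.tensor(0.01))

warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=0.1, total_iters=5)
decay = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)
sched = torch.optim.lr_scheduler.SequentialLR(
    opt, schedulers=[warmup, decay], milestones=[5]
)

for _ in range(10):
    loss = model(torch.randn(2, 4)).sum()
    loss.backward()
    opt.step()       # update parameters
    opt.zero_grad()  # clear grads for the next iteration
    sched.step()     # advance the chained schedule
```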
(#126133) SequentialLR and ChainedLR leak memory, so disable these two schedulers until #126131 is fixed. Re-enables #125925, #125924. Pull Request resolved: #126133. Approved by: https://github.com/yanboliang, https://github.com/aorenste
Resolving the issue because the test is not flaky anymore after 1300 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.
Platforms: linux, slow
This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets, grep for: `test_correctness_ASGD_use_closure_False_cuda_float32`
Test file path: `inductor/test_compiled_optimizers.py`
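To reproduce locally, the usual PyTorch workflow is to run the test file directly and filter by the failing test's name. The command below is a plausible invocation (assuming a CUDA machine and a PyTorch source checkout), not one copied from the issue; since the test also runs on the slow platform, setting `PYTORCH_TEST_WITH_SLOW=1` may be required:

```
PYTORCH_TEST_WITH_SLOW=1 python test/inductor/test_compiled_optimizers.py -k test_correctness_ASGD_use_closure_False_cuda_float32
```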
cc @clee2000 @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire