Status: Closed

Labels: module: flaky-tests (problem is a flaky test in CI), module: nestedtensor (NestedTensor tag, see issue #25032), skipped (denotes a flaky test currently skipped in CI), triaged (this issue has been looked at by a team member and triaged into an appropriate module)
Description
Platforms: linux, rocm
This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.
Over the past 3 hours, it has been determined flaky in 3 workflows, with 3 failures and 3 successes.
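The rule implied here (a test is flagged as flaky in a workflow when its reruns there both fail and pass) can be illustrated with a toy check. This is not PyTorch's actual CI logic, only a sketch of the idea:

```python
# Toy illustration, not PyTorch's CI code: a test is flaky in a workflow
# if its reruns there include both failures and successes.
def is_flaky(outcomes: list[str]) -> bool:
    return "failed" in outcomes and "passed" in outcomes

print(is_flaky(["failed", "passed", "passed"]))  # True: mixed outcomes
print(is_flaky(["failed", "failed"]))            # False: consistently failing
```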
Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the relevant failures will be harder to find in the logs.
To find relevant log snippets:
- Click on the workflow logs linked above
- Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
- Grep for test_backward_sum_cuda_float32. Several instances should appear (flaky tests are rerun in CI), and you can study the logs of each run; the sketch after this list shows one way to scan a downloaded log.
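As a convenience, here is a minimal sketch for scanning a downloaded log file for runs of this test. The helper name and the log path are assumptions for illustration, not part of PyTorch's tooling:

```python
# Hypothetical helper, not part of PyTorch: scan a downloaded CI log
# for every line mentioning the flaky test so each rerun can be found.
from pathlib import Path

TEST_NAME = "test_backward_sum_cuda_float32"

def find_test_runs(log_path: str) -> None:
    text = Path(log_path).read_text(errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        if TEST_NAME in line:
            print(f"{lineno}: {line.strip()}")

find_test_runs("ci_job.log")  # assumed local path to the downloaded log
```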
Sample error message
```
Traceback (most recent call last):
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3070, in wrapper
    method(*args, **kwargs)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3070, in wrapper
    method(*args, **kwargs)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3069, in wrapper
    with policy():
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2451, in __exit__
    raise RuntimeError(msg)
RuntimeError: CUDA driver API confirmed a leak in __main__.TestNestedTensorOpInfoCUDA.test_backward_sum_cuda_float32! Caching allocator allocated memory was 0 and is now reported as 142848 on device 0. CUDA driver allocated memory was 1571815424 and is now 1573912576.
```
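For context, the check that raised this error compares CUDA memory counters taken before and after the test; the real implementation is the policy() context manager from torch.testing._internal.common_utils shown in the traceback. The class below is only a simplified sketch of that idea, with assumed names and a single hard-coded device:

```python
import gc
import torch

# Simplified sketch (assumed names), not PyTorch's actual checker: snapshot
# the caching-allocator and driver memory counters on entry and raise if
# either has grown on exit, mirroring the error message above.
class MemLeakCheckSketch:
    def __enter__(self):
        gc.collect()
        torch.cuda.synchronize()
        self.alloc_before = torch.cuda.memory_allocated(0)
        free, total = torch.cuda.mem_get_info(0)
        self.driver_before = total - free  # memory the driver has handed out
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            return False  # let the test's own failure propagate
        gc.collect()
        torch.cuda.synchronize()
        alloc_after = torch.cuda.memory_allocated(0)
        free, total = torch.cuda.mem_get_info(0)
        driver_after = total - free
        if alloc_after > self.alloc_before or driver_after > self.driver_before:
            raise RuntimeError(
                f"possible CUDA memory leak: caching allocator "
                f"{self.alloc_before} -> {alloc_after}, driver "
                f"{self.driver_before} -> {driver_after} on device 0"
            )
        return False
```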
To execute this test, run the following from the base repo dir:

```
PYTORCH_TEST_WITH_ROCM=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_nestedtensor.py TestNestedTensorOpInfoCUDA.test_backward_sum_cuda_float32
```

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0.
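Since the failure is intermittent, the same command can be driven from Python to retry it locally. This is just a convenience wrapper around the exact invocation above; the retry count is an arbitrary choice:

```python
import os
import subprocess

# Run the repro command from this issue several times, since the leak is flaky.
env = dict(
    os.environ,
    PYTORCH_TEST_WITH_ROCM="1",
    PYTORCH_TEST_CUDA_MEM_LEAK_CHECK="1",
)
for attempt in range(5):  # arbitrary retry count; the failure is intermittent
    result = subprocess.run(
        [
            "python",
            "test/test_nestedtensor.py",
            "TestNestedTensorOpInfoCUDA.test_backward_sum_cuda_float32",
        ],
        env=env,
    )
    print(f"attempt {attempt}: returncode={result.returncode}")
```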
Test file path: test_nestedtensor.py
cc @clee2000 @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ