
DISABLED test_coalesce_reference_cycle_cpu_float64 (__main__.TestSparseCPU) #89395

@pytorch-bot

Description

Platforms: dynamo

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.

Debugging instructions (after clicking on the recent samples link):
DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the relevant failures will be harder to find in the logs.
To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_coalesce_reference_cycle_cpu_float64
  4. There should be several runs of the test (flaky tests are rerun in CI) whose logs you can study. A sketch for reproducing the failure locally follows these steps.
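
If you would rather reproduce locally than dig through CI logs, here is a minimal, hedged sketch. It assumes a PyTorch source checkout and that your build matches the CI configuration; `PYTORCH_TEST_WITH_DYNAMO=1` is the environment variable the dynamo test shard sets, and the `-k` filter is forwarded to unittest:

```python
# Hedged sketch: run just this test under the dynamo platform locally.
# Assumes the current working directory is a PyTorch source checkout.
import os
import subprocess

env = dict(os.environ, PYTORCH_TEST_WITH_DYNAMO="1")
subprocess.run(
    ["python", "test/test_sparse.py", "-k", "test_coalesce_reference_cycle_cpu_float64"],
    env=env,
    check=False,  # the test is expected to fail or flake under dynamo
)
```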
2022-11-20T01:57:36.5512960Z   test_coalesce_reference_cycle_cpu_float64 (__main__.TestSparseCPU) ... [2022-11-20 01:57:12,155] torch._dynamo.convert_frame: [ERROR] WON'T CONVERT test_coalesce_reference_cycle test_sparse.py line 272 
2022-11-20T01:57:36.5513300Z due to: 
2022-11-20T01:57:36.5513473Z Traceback (most recent call last):
2022-11-20T01:57:36.5513866Z   File "/opt/conda/lib/python3.7/site-packages/torch/_dynamo/variables/builder.py", line 820, in wrap_fx_proxy_cls
2022-11-20T01:57:36.5514214Z     + f"{typestr(example_value)} {proxy.node.op} {proxy.node.target}"
2022-11-20T01:57:36.5514633Z AssertionError: torch.* op returned non-Tensor _WeakTensorRef call_function <class 'torch._C._WeakTensorRef'>
2022-11-20T01:57:36.5514849Z 
2022-11-20T01:57:36.5514920Z from user code:
2022-11-20T01:57:36.5515153Z    File "test_sparse.py", line 278, in test_coalesce_reference_cycle
2022-11-20T01:57:36.5515399Z     t_ref = torch._C._WeakTensorRef(t)
2022-11-20T01:57:36.5515527Z 
2022-11-20T01:57:36.5515642Z Set torch._dynamo.config.verbose=True for more information
2022-11-20T01:57:36.5515799Z 
2022-11-20T01:57:36.5515803Z 
2022-11-20T01:57:36.5516068Z [2022-11-20 01:57:12,164] torch._dynamo.convert_frame: [ERROR] WON'T CONVERT test_sparse_sum test_sparse.py line 284 
2022-11-20T01:57:36.5516328Z due to: 
2022-11-20T01:57:36.5516511Z Traceback (most recent call last):
2022-11-20T01:57:36.5516853Z   File "/opt/conda/lib/python3.7/site-packages/torch/_dynamo/utils.py", line 1093, in run_node
2022-11-20T01:57:36.5517126Z     return node.target(*args, **kwargs)
2022-11-20T01:57:36.5517668Z RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
2022-11-20T01:57:36.5517964Z 
2022-11-20T01:57:36.5518089Z The above exception was the direct cause of the following exception:
2022-11-20T01:57:36.5518255Z 
2022-11-20T01:57:36.5518347Z Traceback (most recent call last):
2022-11-20T01:57:36.5518698Z   File "/opt/conda/lib/python3.7/site-packages/torch/_dynamo/utils.py", line 1104, in run_node
2022-11-20T01:57:36.5518942Z     ) from e
2022-11-20T01:57:36.5519550Z RuntimeError: Failed running call_function <built-in method sparse_coo_tensor of type object at 0x7fa2bdc44d08>(*(FakeTensor(FakeTensor(..., device='meta', size=(2, 1), dtype=torch.int64), cpu), FakeTensor(FakeTensor(..., device='meta', size=(1, 1, 4), dtype=torch.float64), cpu)), **{}):
2022-11-20T01:57:36.5520294Z The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
2022-11-20T01:57:36.5520690Z (scroll up for backtrace)
2022-11-20T01:57:36.5520809Z 
2022-11-20T01:57:36.5520978Z The above exception was the direct cause of the following exception:
2022-11-20T01:57:36.5521145Z 
2022-11-20T01:57:36.5521224Z Traceback (most recent call last):
2022-11-20T01:57:36.5521589Z   File "/opt/conda/lib/python3.7/site-packages/torch/_dynamo/utils.py", line 1072, in get_fake_value
2022-11-20T01:57:36.5521873Z     raise TorchRuntimeError() from e
2022-11-20T01:57:36.5522112Z torch._dynamo.exc.TorchRuntimeError: 
2022-11-20T01:57:36.5522244Z 
2022-11-20T01:57:36.5522315Z from user code:
2022-11-20T01:57:36.5522530Z    File "test_sparse.py", line 288, in test_sparse_sum
2022-11-20T01:57:36.5522765Z     S = torch.sparse_coo_tensor(i, v)
2022-11-20T01:57:36.5522879Z 
2022-11-20T01:57:36.5523008Z Set torch._dynamo.config.verbose=True for more information
2022-11-20T01:57:36.5523164Z 
2022-11-20T01:57:36.5523168Z 
2022-11-20T01:57:36.5523236Z FAIL (0.015s)
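
For reference, the two user-code patterns that dynamo refuses to convert can be approximated outside the test suite. This is a hedged sketch reconstructed from the tracebacks above (the real test bodies in test_sparse.py may differ); `torch._C._WeakTensorRef` is the internal helper the test uses, and `dynamo.optimize("eager")` traces the frames without changing the backend:

```python
# Hedged sketch reconstructing the user code from the tracebacks above.
# The real test bodies in test_sparse.py may differ; shapes for sparse_sum
# mirror the FakeTensor sizes printed in the log.
import torch
import torch._dynamo as dynamo

def coalesce_reference_cycle():
    i = torch.tensor([[0, 1]])
    v = torch.tensor([1.0, 2.0], dtype=torch.float64)
    t = torch.sparse_coo_tensor(i, v, (3,)).coalesce()
    # In the CI log, dynamo reports this frame as "WON'T CONVERT" because
    # torch._C._WeakTensorRef(t) returns a non-Tensor object.
    t_ref = torch._C._WeakTensorRef(t)
    del t
    return t_ref.expired()

def sparse_sum():
    i = torch.tensor([[0], [1]])                  # indices, shape (2, 1) as in the log
    v = torch.rand(1, 1, 4, dtype=torch.float64)  # values, shape (1, 1, 4)
    # In the CI log, FakeTensor propagation for this constructor raises the
    # "data is not allocated yet" RuntimeError.
    S = torch.sparse_coo_tensor(i, v)
    # The traceback only shows the constructor; summing is an assumption
    # based on the test name.
    return torch.sparse.sum(S)

# Both frames fall back to eager execution after dynamo logs the errors.
for fn in (coalesce_reference_cycle, sparse_sum):
    print(fn.__name__, dynamo.optimize("eager")(fn)())
```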

cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire

    Labels

    module: dynamo, module: flaky-tests, module: sparse, skipped, triaged
