
test_storage_meta_errors_cpu should fail but passes in CI #104410

Closed

malfet opened this issue Jun 29, 2023 · 2 comments
Labels
module: ci Related to continuous integration
triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

malfet commented Jun 29, 2023

🐛 Describe the bug

CI signal on #104355 is green, and the test run at https://github.com/pytorch/pytorch/actions/runs/5409497721/jobs/9829920184 also reports green, which should not be possible.

And indeed, a local run of the test fails with a TypeError:

test_storage_meta_errors_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... /home/nshulga/git/pytorch/pytorch/test/test_torch.py:332: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  s0 = torch.TypedStorage([1, 2, 3, 4], device='meta', dtype=dtype)
ERROR

======================================================================
ERROR: test_storage_meta_errors_cpu_uint8 (__main__.TestTorchDeviceTypeCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/nshulga/git/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 414, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/nshulga/git/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 1074, in only_fn
    return fn(slf, *args, **kwargs)
  File "/home/nshulga/git/pytorch/pytorch/test/test_torch.py", line 351, in test_storage_meta_errors
    s0.pin_memory()
  File "/home/nshulga/git/pytorch/pytorch/torch/storage.py", line 842, in pin_memory
    return self._new_wrapped_storage(self._untyped_storage.pin_memory(device=device))
  File "/home/nshulga/git/pytorch/pytorch/torch/storage.py", line 223, in pin_memory
    raise TypeError(f"cannot pin '{self.type()}' only CPU memory can be pinned")
TypeError: cannot pin 'torch.storage.UntypedStorage' only CPU memory can be pinned

----------------------------------------------------------------------
Ran 1 test in 0.014s

FAILED (errors=1)
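The guard that produces this error can be illustrated with a simplified, hypothetical mock (`FakeStorage` is not a real PyTorch class; it only mirrors the check in `torch/storage.py` shown in the traceback, which refuses to pin anything that is not CPU memory):

```python
class FakeStorage:
    """Hypothetical stand-in for a storage object; not part of PyTorch."""

    def __init__(self, device):
        self.device = device

    def type(self):
        return "torch.storage.UntypedStorage"

    def pin_memory(self):
        # Mirrors the guard in torch/storage.py: only CPU memory can be
        # pinned, so a 'meta' storage raises TypeError instead of passing.
        if self.device != "cpu":
            raise TypeError(
                f"cannot pin '{self.type()}' only CPU memory can be pinned"
            )
        return self


# A meta-device storage should raise, which is exactly what the test expects:
try:
    FakeStorage("meta").pin_memory()
except TypeError as e:
    print(e)
```

A correctly wired test run would therefore have to report this TypeError; a green result means the test body never executed.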

Versions

CI

cc @ezyang @gchanan @zou3519 @seemethere @pytorch/pytorch-dev-infra

@malfet malfet added high priority module: ci Related to continuous integration labels Jun 29, 2023

malfet commented Jun 29, 2023

An even more ridiculous example:

$ python test_segment_reductions.py -v -k test_multi_d_simple_cpu_float32_int32
test_multi_d_simple_cpu_float32_int32 (__main__.TestSegmentReductionsCPU) ... ERROR

======================================================================
ERROR: test_multi_d_simple_cpu_float32_int32 (__main__.TestSegmentReductionsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/nshulga/git/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 414, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/nshulga/git/pytorch/pytorch/test/test_segment_reductions.py", line 357, in test_multi_d_simple
    expected_result,
UnboundLocalError: local variable 'expected_result' referenced before assignment

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (errors=1)
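The UnboundLocalError above is the classic symptom of a variable assigned only inside conditional branches. A minimal standalone reproduction (`compute_expected` is a hypothetical sketch, not the actual test code):

```python
def compute_expected(reduction):
    # Hypothetical sketch: 'expected_result' is only bound for the
    # reductions handled below, so any other value leaves the name
    # unassigned and the final 'return' raises UnboundLocalError.
    if reduction == "sum":
        expected_result = 10
    elif reduction == "max":
        expected_result = 4
    # No else branch: for any other value the name is never bound.
    return expected_result


print(compute_expected("sum"))  # 10
try:
    compute_expected("mean")
except UnboundLocalError as e:
    print(e)
```

An error this basic firing on the first run strongly suggests the test had never been executed before.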

Edit: this one is easy to explain, though: test_segment_reductions was never run in OSS CI :/

@malfet malfet self-assigned this Jun 29, 2023
@malfet malfet added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module and removed high priority triage review labels Jun 29, 2023

malfet commented Jun 29, 2023

Actually, this is just a benign case of not running CPU tests on GPU (i.e., the test must not carry the @onlyCPU decorator for the issue to surface).
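Why the decorator hides the failure can be sketched with a simplified, hypothetical version of device-type test filtering (`onlyCPU` here is a toy reimplementation, not the actual decorator from `torch.testing._internal.common_device_type`):

```python
import unittest


def onlyCPU(fn):
    # Toy version of the device-type decorator: skip the test entirely
    # unless the instantiated device is 'cpu'. On a GPU-only CI job the
    # decorated test body never executes, so its bugs go unnoticed.
    def wrapper(self, device):
        if device != "cpu":
            raise unittest.SkipTest("onlyCPU: skipping on non-CPU device")
        return fn(self, device)
    return wrapper


class FakeDeviceTest:
    @onlyCPU
    def test_something(self, device):
        return f"ran on {device}"


t = FakeDeviceTest()
print(t.test_something("cpu"))  # ran on cpu
# t.test_something("cuda") would raise SkipTest instead of running the body
```

Under this model, a GPU CI job reports the skipped test as part of a green run even when the test body itself is broken.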

malfet added a commit that referenced this issue Jun 29, 2023
Remove `test_segment_reductions` from the list of blocklisted tests.
Remove the `@onlyCPU` qualifier from test_segment_reductions, as it has CUDA-specific parts.

Fixes #104410