Add type information to torch.cuda #47134
Conversation
Hi @guilhermeleobas! Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file. In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
💊 CI failures summary and remediations

As of commit 22bd4c1 (more details on the Dr. CI page):

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

pytorch_linux_xenial_py3_6_gcc5_4_test (1/2)
Step: "Run tests" (full log | diagnosis details | 🔁 rerun)
Thanks @guilhermeleobas. The one CI failure that needed a closer look is test_abs_cuda_complex128 for ROCm:
======================================================================
FAIL: test_abs_cuda_complex128 (__main__.TestForeachCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 834, in wrapper
method(*args, **kwargs)
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 272, in instantiated_test
result = test_fn(self, *args)
File "test_foreach.py", line 259, in test_abs
self.assertEqual(res, expected)
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1138, in assertEqual
exact_dtype=exact_dtype, exact_device=exact_device)
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1112, in assertEqual
self.assertTrue(result, msg=msg)
AssertionError: False is not true : Tensors failed to compare as equal! Attempted to compare equality of tensors with different dtypes. Got dtypes torch.float64 and torch.complex128.
======================================================================
FAIL: test_abs_cuda_complex64 (__main__.TestForeachCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 834, in wrapper
method(*args, **kwargs)
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 272, in instantiated_test
result = test_fn(self, *args)
File "test_foreach.py", line 252, in test_abs
self.assertEqual(res, expected)
File "/var/lib/jenkins/.local/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1138, in assertEqual
exact_dtype=exact_dtype, exact_device=exact_device
It looks like a flake; that job has a different failure in many other PRs, but they're different each time.
Maybe just push an update for my one other code-comment suggestion, so we can check it's not the same failure twice?
LGTM other than that.
torch/cuda/__init__.py
Outdated
with device(self.get_device()):
    return super(_CudaBase, self).type(*args, **kwargs)
# We could use a Protocol here to tell mypy that self has `get_device` method
# but it is only available on Python >= 3.8 on typing or mypy_extensions on
I think you mean typing_extensions? That is a non-optional dependency already, so it could be used. That said, I don't think it's necessary to spend time on this now; just updating the comment may be good for now.
Okay, the ROCm failures are different ones now - these are all flakes.
    return super(_CudaBase, self).type(*args, **kwargs)
# We could use a Protocol here to tell mypy that self has a `get_device` method
# but it is only available in the typing module on Python >= 3.8
# or in the typing_extensions module on Python >= 3.6
btw, we're py3.6+ only now
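For reference, here is a minimal sketch of what the Protocol approach discussed above could look like (the names _SupportsGetDevice and _typed_type are illustrative, not from this PR), assuming typing_extensions is available on Python < 3.8:

import sys

if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    from typing_extensions import Protocol

class _SupportsGetDevice(Protocol):
    # Structural type: anything with a get_device() method matches,
    # so no inheritance from this class is required.
    def get_device(self) -> int: ...

def _typed_type(self: _SupportsGetDevice) -> int:
    # mypy now accepts self.get_device() without a cast or "type: ignore".
    return self.get_device()

Since typing_extensions is already a non-optional dependency, this would work across the supported Python range; the PR's comment-only approach just defers that refactor.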
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Fixes #47133