
annotate a few torch.nn.modules.* modules #45772

Closed
wants to merge 7 commits into from

Conversation

guilhermeleobas
Collaborator

Fixes #45771
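For context, the kind of change this PR makes looks roughly like the following. This is a minimal sketch in plain Python, not the actual diff; the real annotations live in files such as torch/nn/modules/linear.py and annotate with torch.Tensor. Here "Tensor" is written as a string placeholder so the example runs without torch installed.

```python
from typing import Optional, List

class Linear:
    # Class-level attribute annotations let mypy check attribute access
    # on instances of the module.
    in_features: int
    out_features: int

    def __init__(self, in_features: int, out_features: int, bias: bool = True) -> None:
        self.in_features = in_features
        self.out_features = out_features
        # Optional[...] captures "may be None when bias=False".
        self.bias: Optional[List[float]] = [0.0] * out_features if bias else None

    def forward(self, input: "Tensor") -> "Tensor":  # type: ignore[name-defined]
        # Compute logic elided in this sketch; only the signature matters
        # for type checking.
        raise NotImplementedError
```

With annotations like these in place, running mypy over the annotated modules (as the linked issue proposes enabling during CI) can flag mismatched call sites and attribute misuse.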

@guilhermeleobas guilhermeleobas added the module: typing Related to mypy type annotations label Oct 2, 2020
@guilhermeleobas guilhermeleobas self-assigned this Oct 2, 2020
@rgommers rgommers removed the request for review from apaszke October 2, 2020 22:08
@albanD albanD added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Oct 6, 2020
@albanD
Collaborator

albanD commented Oct 6, 2020

@rgommers you can ping me when this is ready for merge.

@guilhermeleobas
Collaborator Author

The current failures don't seem to be related to any changes introduced in this PR. I will rebase onto master.

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 827, in wrapper
    method(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 273, in instantiated_test
    result = test_fn(self, *args)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 508, in dep_fn
    return fn(slf, device, *args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 508, in dep_fn
    return fn(slf, device, *args, **kwargs)
  File "test_torch.py", line 20843, in test_svd_square
    self._test_svd_helper((10, 10), True, False, device, dtype)
  File "test_torch.py", line 20828, in _test_svd_helper
    device_result = torch.svd(device_tensor, some=some)
RuntimeError: "svd_cuda" not implemented for 'ComplexDouble'

@dr-ci

dr-ci bot commented Oct 16, 2020

💊 CI failures summary and remediations

As of commit b207772 (more details on the Dr. CI page):


  • 3/3 failures introduced in this PR

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_py3_clang5_asan_build (1/3)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Oct 30 18:44:39 sccache: error: couldn't connect to server
Oct 30 18:44:39 +++ eval 'extract_trap_cmd ' 
Oct 30 18:44:39 ++++ extract_trap_cmd 
Oct 30 18:44:39 ++++ printf '%s\n' '' 
Oct 30 18:44:39 +++ printf '%s\n' cleanup 
Oct 30 18:44:39 ++ trap -- ' 
Oct 30 18:44:39 cleanup' EXIT 
Oct 30 18:44:39 ++ [[ pytorch-linux-xenial-py3-clang5-asan-build != *pytorch-win-* ]] 
Oct 30 18:44:39 ++ which sccache 
Oct 30 18:44:39 ++ sccache --stop-server 
Oct 30 18:44:39 Stopping sccache server... 
Oct 30 18:44:39 sccache: error: couldn't connect to server 
Oct 30 18:44:39 sccache: caused by: Connection refused (os error 111) 
Oct 30 18:44:39 ++ true 
Oct 30 18:44:39 ++ rm /var/lib/jenkins/sccache_error.log 
Oct 30 18:44:39 rm: cannot remove '/var/lib/jenkins/sccache_error.log': No such file or directory 
Oct 30 18:44:39 ++ true 
Oct 30 18:44:39 ++ [[ pytorch-linux-xenial-py3-clang5-asan-build == *rocm* ]] 
Oct 30 18:44:39 ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 
Oct 30 18:44:39 ++ SCCACHE_IDLE_TIMEOUT=1200 
Oct 30 18:44:39 ++ RUST_LOG=sccache::server=error 
Oct 30 18:44:39 ++ sccache --start-server 

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (2/3)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Oct 30 20:04:41 FAIL [0.790s]: test_muldiv_scalar_xla_bfloat16 (__main__.TestTorchDeviceTypeXLA)
Oct 30 20:03:40   test_where_scalar_valid_combination_xla_float16 (__main__.TestTorchDeviceTypeXLA) ... skip (0.002s) 
Oct 30 20:03:43   test_where_scalar_valid_combination_xla_float32 (__main__.TestTorchDeviceTypeXLA) ... ok (2.848s) 
Oct 30 20:04:37   test_where_scalar_valid_combination_xla_float64 (__main__.TestTorchDeviceTypeXLA) ... ok (54.645s) 
Oct 30 20:04:38   test_where_scalar_valid_combination_xla_int16 (__main__.TestTorchDeviceTypeXLA) ... ok (0.518s) 
Oct 30 20:04:38   test_where_scalar_valid_combination_xla_int32 (__main__.TestTorchDeviceTypeXLA) ... ok (0.468s) 
Oct 30 20:04:40   test_where_scalar_valid_combination_xla_int64 (__main__.TestTorchDeviceTypeXLA) ... ok (1.470s) 
Oct 30 20:04:40   test_where_scalar_valid_combination_xla_int8 (__main__.TestTorchDeviceTypeXLA) ... ok (0.507s) 
Oct 30 20:04:41   test_where_scalar_valid_combination_xla_uint8 (__main__.TestTorchDeviceTypeXLA) ... ok (0.467s) 
Oct 30 20:04:41  
Oct 30 20:04:41 ====================================================================== 
Oct 30 20:04:41 FAIL [0.790s]: test_muldiv_scalar_xla_bfloat16 (__main__.TestTorchDeviceTypeXLA) 
Oct 30 20:04:41 ---------------------------------------------------------------------- 
Oct 30 20:04:41 Traceback (most recent call last): 
Oct 30 20:04:41   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 274, in instantiated_test 
Oct 30 20:04:41     result = test_fn(self, *args) 
Oct 30 20:04:41   File "/var/lib/jenkins/workspace/xla/test/../../test/test_torch.py", line 19475, in test_muldiv_scalar 
Oct 30 20:04:41     self.assertEqual(s / x, y / x) 
Oct 30 20:04:41   File "/var/lib/jenkins/workspace/xla/test/pytorch_test_base.py", line 551, in assertEqual 
Oct 30 20:04:41     return DeviceTypeTestBase.assertEqual(self, x, y, *args, **kwargs) 
Oct 30 20:04:41   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1145, in assertEqual 
Oct 30 20:04:41     self.assertTrue(result, msg=msg) 

See CircleCI build pytorch_linux_xenial_py3_clang7_onnx_build (3/3)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Oct 30 18:44:41 sccache: error: couldn't connect to server
Oct 30 18:44:41 +++ eval 'extract_trap_cmd ' 
Oct 30 18:44:41 ++++ extract_trap_cmd 
Oct 30 18:44:41 ++++ printf '%s\n' '' 
Oct 30 18:44:41 +++ printf '%s\n' cleanup 
Oct 30 18:44:41 ++ trap -- ' 
Oct 30 18:44:41 cleanup' EXIT 
Oct 30 18:44:41 ++ [[ pytorch-linux-xenial-py3-clang7-onnx-build != *pytorch-win-* ]] 
Oct 30 18:44:41 ++ which sccache 
Oct 30 18:44:41 ++ sccache --stop-server 
Oct 30 18:44:41 Stopping sccache server... 
Oct 30 18:44:41 sccache: error: couldn't connect to server 
Oct 30 18:44:41 sccache: caused by: Connection refused (os error 111) 
Oct 30 18:44:41 ++ true 
Oct 30 18:44:41 ++ rm /var/lib/jenkins/sccache_error.log 
Oct 30 18:44:41 rm: cannot remove '/var/lib/jenkins/sccache_error.log': No such file or directory 
Oct 30 18:44:41 ++ true 
Oct 30 18:44:41 ++ [[ pytorch-linux-xenial-py3-clang7-onnx-build == *rocm* ]] 
Oct 30 18:44:41 ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 
Oct 30 18:44:41 ++ SCCACHE_IDLE_TIMEOUT=1200 
Oct 30 18:44:41 ++ RUST_LOG=sccache::server=error 
Oct 30 18:44:41 ++ sccache --start-server 

This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 15 times.

@guilhermeleobas
Collaborator Author

CI failure is not related to any changes introduced in this PR.

Traceback (most recent call last):
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\_internal\common_quantized.py", line 122, in test_fn
    qfunction(*args, **kwargs)
  File "C:\Users\circleci\project\test\quantization\test_quantize_jit.py", line 3033, in test_linear
    "quantized::linear_dynamic", tracing=tracing, dynamic=True)
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\_internal\common_quantization.py", line 499, in checkGraphModeOp
    models[d] = quantize_dynamic_jit(model, qconfig_dict, debug=d)
  File "C:\Users\circleci\project\build\win_tmp\build\torch\quantization\quantize_jit.py", line 207, in quantize_dynamic_jit
    return _quantize_jit(model, qconfig_dict, inplace=inplace, debug=debug, quant_type=QuantType.DYNAMIC)
  File "C:\Users\circleci\project\build\win_tmp\build\torch\quantization\quantize_jit.py", line 105, in _quantize_jit
    model = convert_dynamic_jit(model, True, debug)
  File "C:\Users\circleci\project\build\win_tmp\build\torch\quantization\quantize_jit.py", line 98, in convert_dynamic_jit
    return _convert_jit(model, inplace, debug, quant_type=QuantType.DYNAMIC, preserved_attrs=preserved_attrs)
  File "C:\Users\circleci\project\build\win_tmp\build\torch\quantization\quantize_jit.py", line 78, in _convert_jit
    model_c = torch._C._jit_pass_insert_quant_dequant(model_c, 'forward', inplace, debug, quant_type)
RuntimeError: 0 INTERNAL ASSERT FAILED at "..\\torch\\csrc\\jit\\ir\\alias_analysis.cpp":461, please report a bug to PyTorch. We don't have an op for aten::quantize_per_channel but it isn't a special case.  Argument types: Tensor, float, int, int, 

Collaborator

@rgommers rgommers left a comment


Thanks @guilhermeleobas, overall this looks good. One comment that needs a closer look / different solution.

torch/nn/modules/linear.py (review comment, resolved)
@codecov

codecov bot commented Oct 30, 2020

Codecov Report

Merging #45772 into master will decrease coverage by 0.00%.
The diff coverage is 85.00%.

@@            Coverage Diff             @@
##           master   #45772      +/-   ##
==========================================
- Coverage   68.87%   68.87%   -0.01%     
==========================================
  Files         436      436              
  Lines       56368    56376       +8     
==========================================
+ Hits        38823    38828       +5     
- Misses      17545    17548       +3     

@facebook-github-bot
Contributor

Hi @guilhermeleobas!

Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

Collaborator

@rgommers rgommers left a comment


LGTM now, thanks @guilhermeleobas. CI failures are unrelated.

@albanD could you take it from here?

@rgommers rgommers requested a review from albanD October 31, 2020 12:25
Collaborator

@albanD albanD left a comment


Thanks @rgommers, I can take it, yes.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@albanD merged this pull request in 9b52654.

Labels
cla signed Merged module: typing Related to mypy type annotations open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Development

Successfully merging this pull request may close these issues.

Enable torch.nn.modules.* typechecks during CI
5 participants