
Add typing annotations for torch.nn.quantized.dynamic.modules.rnn #43186

Closed
wants to merge 2 commits into master from quantized-rnn

Conversation

@guilhermeleobas (Collaborator) commented Aug 18, 2020

Fixes #43185

xref: gh-43072
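
For context, the change is annotation-only. Below is a minimal, hypothetical sketch (not the actual diff) of the style of annotations such a pass adds to torch/nn/quantized/dynamic/modules/rnn.py; the class name and shapes are made up for illustration:

from typing import Optional, Tuple

import torch
from torch import Tensor

class DynamicLSTMStub(torch.nn.Module):
    # Illustrative only: explicit Tensor/Optional annotations replace
    # signatures that mypy previously could not check.
    def forward(
        self,
        input: Tensor,
        hx: Optional[Tuple[Tensor, Tensor]] = None,
    ) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
        if hx is None:
            zeros = torch.zeros(1, input.size(1), 4)
            hx = (zeros, zeros)
        return input, hx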

@guilhermeleobas guilhermeleobas added the module: typing (Related to mypy type annotations) label Aug 18, 2020
@guilhermeleobas guilhermeleobas self-assigned this Aug 18, 2020
dr-ci bot commented Aug 18, 2020

💊 CI failures summary and remediations

As of commit c5939ad (more details on the Dr. CI page):


  • 2/2 failures possibly introduced in this PR
    • 2/2 non-CircleCI failure(s)

Extra GitHub checks: 2 failed



@mrshenli mrshenli added the oncall: quantization (Quantization support in PyTorch) and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels Aug 18, 2020
@hameerabbasi (Collaborator) commented Aug 20, 2020

Probably rebase or merge viable/strict.

@guilhermeleobas (Collaborator, Author) commented

It seems that the failures were introduced in this PR. I can't reproduce them locally because the tests require fbgemm:

Command to reproduce test failure:

$ pytest test/test_quantization.py -k 'test_quantized_rnn' -sv -rs

platform linux -- Python 3.8.4, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 -- /home/guilhermel/miniconda3/envs/pytorch-cuda-dev/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/work/guilhermel/git/Quansight/pytorch/.hypothesis/examples')
rootdir: /work/guilhermel/git/Quansight/pytorch
plugins: hypothesis-5.20.3
collected 323 items / 321 deselected / 2 selected

test/test_quantization.py::TestPostTrainingDynamic::test_quantized_rnn SKIPPED
test/test_quantization.py::TestPostTrainingDynamic::test_quantized_rnn_cell SKIPPED

======================================================================================================== short test summary info ========================================================================================================
SKIPPED [1] test/quantization/test_quantize.py:737: Quantized operations require FBGEMM. FBGEMM is only optimized for CPUs with instruction set support AVX2 or newer.
SKIPPED [1] test/quantization/test_quantize.py:778: Quantized operations require FBGEMM. FBGEMM is only optimized for CPUs with instruction set support AVX2 or newer.
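
The SKIPPED lines come from FBGEMM gating. As a hedged sketch of how such a gate works, assuming torch.backends.quantized.supported_engines as the capability probe (the decorator name and test class here are illustrative, not the helpers PyTorch's suite actually uses):

import unittest

import torch

# Skip unless the FBGEMM backend is available (requires AVX2 or newer).
skip_if_no_fbgemm = unittest.skipUnless(
    "fbgemm" in torch.backends.quantized.supported_engines,
    "Quantized operations require FBGEMM. FBGEMM is only optimized for "
    "CPUs with instruction set support AVX2 or newer.",
)

class TestPostTrainingDynamicStub(unittest.TestCase):
    @skip_if_no_fbgemm
    def test_quantized_rnn(self):
        self.assertTrue(True)  # placeholder body

On a machine without AVX2 support, such a test reports SKIPPED with the reason string above, matching the output shown.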

@hameerabbasi (Collaborator) commented

Perhaps merge viable/strict?

@guilhermeleobas (Collaborator, Author) commented Aug 26, 2020

> It seems that the failures were introduced in this PR. I can't reproduce them locally because the tests require fbgemm.

I don't think these tests are related to the changes introduced in this pull request. I can reproduce the same error on master and viable/strict.

The tests fail on the following CI machines:

  • pytorch_linux_bionic_py3_8_gcc9_test
  • pytorch_linux_bionic_py3_6_clang9_test
  • pytorch_linux_xenial_py3_clang5_asan_test2
  • pytorch_linux_xenial_py3_6_gcc5_4_test
  • pytorch_windows_vs2019_py36_cuda10.1_test2
  • pytorch_linux_xenial_py3_6_gcc5_4_ge_config_simple_test
  • pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test

@guilhermeleobas guilhermeleobas marked this pull request as ready for review August 26, 2020 16:49
@rgommers (Collaborator) commented

> I don't think these tests are related to the changes introduced in this pull request. I can reproduce the same error on master and viable/strict.

You mean in CI on this PR? https://ezyang.github.io/pytorch-ci-hud/build/pytorch-master looks very green right now. It's likely caused by this PR.

The 11 JIT failures in pytorch_linux_bionic_py3_6_clang9_test look unrelated, but it would be nice to be sure.

@rgommers (Collaborator) commented

> You mean in CI on this PR? https://ezyang.github.io/pytorch-ci-hud/build/pytorch-master looks very green right now. It's likely caused by this PR.

I rebuilt with fbgemm support and tested this branch - the two hypothesis tests pass for me locally.

@rgommers (Collaborator) commented

@guilhermeleobas other PRs don't have these failures, so some more digging is needed. For starters, can you squash all your commits and rebase on current master?

@guilhermeleobas (Collaborator, Author) commented

The error was related to PyTorch not parsing # type: ignore comments correctly. PR #43446 seems to fix that. I will rebase and run CI again.
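
For illustration, here is a hedged sketch (assuming a build that includes that fix) of the construct involved: a mypy line-suppression comment that the TorchScript frontend must skip over rather than misread as a type comment. The function itself is made up for this example:

import torch

# "# type: ignore[...]" is a mypy suppression, not a TorchScript type
# comment; scripting must tolerate it on an annotated line.
def double(x: torch.Tensor) -> torch.Tensor:
    return x * 2  # type: ignore[misc]

scripted = torch.jit.script(double)
print(scripted(torch.ones(3)))  # tensor([2., 2., 2.])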

@pytorch pytorch deleted a comment from codecov bot Aug 28, 2020
codecov bot commented Aug 28, 2020

Codecov Report

Merging #43186 into master will decrease coverage by 0.00%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master   #43186      +/-   ##
==========================================
- Coverage   69.34%   69.34%   -0.01%     
==========================================
  Files         378      378              
  Lines       46698    46697       -1     
==========================================
- Hits        32381    32380       -1     
  Misses      14317    14317              
Impacted Files                               Coverage Δ
torch/nn/quantized/dynamic/modules/rnn.py    89.02% <100.00%> (-0.03%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 26161e8...c5939ad.

@rgommers (Collaborator) left a comment

LGTM now, thanks @guilhermeleobas

@rgommers rgommers requested a review from malfet August 28, 2020 19:59
@rgommers (Collaborator) commented

The two CI failures are unrelated.

@guilhermeleobas (Collaborator, Author) commented

Thanks for the review, @rgommers and @hameerabbasi

@facebook-github-bot (Contributor) left a comment

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented

@malfet merged this pull request in 63a0bb0.

@guilhermeleobas guilhermeleobas deleted the quantized-rnn branch September 4, 2020 19:27
Labels
  • Merged
  • module: typing (Related to mypy type annotations)
  • oncall: quantization (Quantization support in PyTorch)
  • open source
  • triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

Enable torch.nn.quantized.dynamic.modules.rnn typechecks during CI
8 participants