Add typing annotations for torch.nn.quantized.dynamic.modules.rnn #43186
Conversation
💊 CI failures summary and remediations
As of commit c5939ad (more details on the Dr. CI page):
Extra GitHub checks: 2 failed
This comment was automatically generated by Dr. CI (expand for details). Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group. This comment has been revised 71 times.
Probably rebase or merge.
It seems that the failures were introduced in this PR. I can't reproduce them locally because the tests require FBGEMM. Command to reproduce the test failure:
$ pytest test/test_quantization.py -k 'test_quantized_rnn' -sv -rs
platform linux -- Python 3.8.4, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 -- /home/guilhermel/miniconda3/envs/pytorch-cuda-dev/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/work/guilhermel/git/Quansight/pytorch/.hypothesis/examples')
rootdir: /work/guilhermel/git/Quansight/pytorch
plugins: hypothesis-5.20.3
collected 323 items / 321 deselected / 2 selected
test/test_quantization.py::TestPostTrainingDynamic::test_quantized_rnn SKIPPED
test/test_quantization.py::TestPostTrainingDynamic::test_quantized_rnn_cell SKIPPED
======================================================================================================== short test summary info ========================================================================================================
SKIPPED [1] test/quantization/test_quantize.py:737: Quantized operations require FBGEMM. FBGEMM is only optimized for CPUs with instruction set support AVX2 or newer.
SKIPPED [1] test/quantization/test_quantize.py:778: Quantized operations require FBGEMM. FBGEMM is only optimized for CPUs with instruction set support AVX2 or newer.
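The SKIPPED results come from the FBGEMM requirement noted in the messages above. As a side note (not part of the thread), one quick Linux-only way to check whether the local CPU advertises AVX2, and hence whether these tests can run rather than be skipped, is:

```shell
# Linux-only sketch: FBGEMM needs AVX2, so check the CPU flags before
# expecting the quantized tests to run instead of being skipped.
if grep -q avx2 /proc/cpuinfo; then
  echo "AVX2 available: FBGEMM-backed quantized tests should run"
else
  echo "no AVX2: quantized tests will be SKIPPED"
fi
```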
Perhaps merge.
I don't think these tests are related to the changes introduced in this pull request. I can reproduce the same error on master. The test fails on the following CI machines:
You mean in CI on this PR? https://ezyang.github.io/pytorch-ci-hud/build/pytorch-master looks very green right now. It's likely caused by this PR, and the 11 JIT failures look related.
I rebuilt with fbgemm support and tested this branch - the two hypothesis tests pass for me locally.
@guilhermeleobas other PRs don't have these failures, so some more digging is needed. For starters, can you squash all your commits and rebase on current master?
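For reference, one way to carry out the squash-and-rebase suggested above (the remote name `upstream` and the commit message are assumptions about the contributor's checkout, not commands from the thread):

```shell
# Squash everything on the feature branch into a single commit, then
# rebase onto freshly fetched master ("upstream" is an assumed remote name).
git fetch upstream master
git reset --soft "$(git merge-base HEAD upstream/master)"
git commit -m "Add typing annotations for torch.nn.quantized.dynamic.modules.rnn"
git rebase upstream/master
```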
The error was related to PyTorch not parsing the type annotations.
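For context, the PR adds annotations of the Optional/Tuple flavor that the TorchScript parser has to accept; below is a minimal, torch-free sketch of that annotation style (the class and parameter names are illustrative, not the actual PR code, which lives in torch.nn.quantized.dynamic.modules.rnn):

```python
from typing import List, Optional, Tuple

class DynamicRNNStub:
    """Illustrative stand-in for a quantized dynamic RNN module's signature."""

    def forward(
        self,
        input: List[float],
        hx: Optional[Tuple[List[float], List[float]]] = None,
    ) -> Tuple[List[float], Optional[Tuple[List[float], List[float]]]]:
        # Pass the input and hidden state through unchanged; only the
        # annotation shapes matter for this sketch.
        return input, hx
```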
Codecov Report
@@ Coverage Diff @@
## master #43186 +/- ##
==========================================
- Coverage 69.34% 69.34% -0.01%
==========================================
Files 378 378
Lines 46698 46697 -1
==========================================
- Hits 32381 32380 -1
Misses 14317 14317
Continue to review full report at Codecov.
LGTM now, thanks @guilhermeleobas
The two CI failures are unrelated.
Thanks for the review, @rgommers and @hameerabbasi |
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Fixes #43185
xref: gh-43072