
Conversation

@kimishpatel
Contributor

@kimishpatel kimishpatel commented May 12, 2020

Stack from ghstack:

Summary:
Since the qtensor stores its scale as a double, the resulting type mismatch
can cause us to repack the weights in QNNPACK on every call. Worse, since we
release the original weights, the runtime can crash.
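
To make the failure mode concrete: if the prepacked weight caches the input scale as a float while the qtensor reports it as a double, an exact-equality comparison between the two never holds, so the weights are repacked on every call. The sketch below is illustrative only; the struct, field, and function names are hypothetical, not the actual ATen/QNNPACK code.

```cpp
// Minimal sketch (hypothetical names, not the real ATen/QNNPACK code) of how a
// float-vs-double scale comparison forces a repack on every call.
#include <cstdio>

struct PackedWeight {
  float cached_input_scale;  // hypothetical cache of the scale used at pack time
  // ... packed QNNPACK buffers would live here ...
};

// Hypothetical staleness check mimicking the "repack if the scale changed" pattern.
bool needs_repack(const PackedWeight& w, double qtensor_scale) {
  // The float cache widened back to double generally differs from the original
  // double in its low bits, so this exact comparison keeps returning true.
  return static_cast<double>(w.cached_input_scale) != qtensor_scale;
}

int main() {
  double qtensor_scale = 0.1234567890123;             // not exactly representable as float
  PackedWeight w{static_cast<float>(qtensor_scale)};  // cache stores a narrowed copy

  std::printf("repack on 1st call? %d\n", needs_repack(w, qtensor_scale));  // prints 1
  std::printf("repack on 2nd call? %d\n", needs_repack(w, qtensor_scale));  // still 1
  return 0;
}
```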

Test Plan:
pytest test/quantization/test_quantized_module.py::TestStaticQuantizedModule::test_conv2d_api

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D21529384

@dr-ci

dr-ci bot commented May 12, 2020

💊 CI failures summary and remediations

As of commit 46a3f35 (more details on the Dr. CI page):


  • 3/3 failures possibly* introduced in this PR
    • 2/3 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

May 12 21:06:24 [E request_callback_impl.cpp:99] Received error while processing request type 15: size mismatch, m1: [3 x 3], m2: [6 x 6] at /var/lib/jenkins/workspace/aten/src/TH/generic/THTensorMath.cpp:41
May 12 21:06:20   test_context_cleanup_nested_rpc (__main__.DistAutogradTestWithSpawn) ... ok (0.813s) 
May 12 21:06:21   test_context_cleanup_no_tensors (__main__.DistAutogradTestWithSpawn) ... ok (0.813s) 
May 12 21:06:21   test_context_cleanup_tensor_no_grad (__main__.DistAutogradTestWithSpawn) ... ok (0.713s) 
May 12 21:06:22   test_context_cleanup_tensor_with_grad (__main__.DistAutogradTestWithSpawn) ... ok (0.813s) 
May 12 21:06:22   test_debug_info (__main__.DistAutogradTestWithSpawn) ... skip (0.003s) 
May 12 21:06:23   test_dist_autograd_profiling (__main__.DistAutogradTestWithSpawn) ... ok (0.914s) 
May 12 21:06:23   test_embedding_bag_with_no_grad_tensors (__main__.DistAutogradTestWithSpawn) ... skip (0.002s) 
May 12 21:06:24   test_error_in_context (__main__.DistAutogradTestWithSpawn) ... [E request_callback_impl.cpp:99] Received error while processing request type 15: size mismatch, m1: [3 x 3], m2: [6 x 6] at /var/lib/jenkins/workspace/aten/src/TH/generic/THTensorMath.cpp:41 
May 12 21:06:24 [E request_callback_impl.cpp:99] Received error while processing request type 15: size mismatch, m1: [3 x 3], m2: [6 x 6] at /var/lib/jenkins/workspace/aten/src/TH/generic/THTensorMath.cpp:41 
May 12 21:06:24 [E request_callback_impl.cpp:99] Received error while processing request type 15: size mismatch, m1: [3 x 3], m2: [6 x 6] at /var/lib/jenkins/workspace/aten/src/TH/generic/THTensorMath.cpp:41 
May 12 21:06:24 [E request_callback_impl.cpp:99] Received error while processing request type 15: size mismatch, m1: [3 x 3], m2: [6 x 6] at /var/lib/jenkins/workspace/aten/src/TH/generic/THTensorMath.cpp:41 
May 12 21:06:24 ok (0.713s) 
May 12 21:06:25   test_grad_copy_sparse_indices_extra_ref (__main__.DistAutogradTestWithSpawn) ... [W pybind_utils.h:798] Warning: Using sparse tensors in TorchScript is experimental. Many optimization pathways have not been thoroughly tested with sparse tensors. Please include the fact that the network is running sparse tensors in any bug reports submitted. (function operator()) 
May 12 21:06:25 [W pybind_utils.h:798] Warning: Using sparse tensors in TorchScript is experimental. Many optimization pathways have not been thoroughly tested with sparse tensors. Please include the fact that the network is running sparse tensors in any bug reports submitted. (function operator()) 
May 12 21:06:25 [W pybind_utils.h:798] Warning: Using sparse tensors in TorchScript is experimental. Many optimization pathways have not been thoroughly tested with sparse tensors. Please include the fact that the network is running sparse tensors in any bug reports submitted. (function operator()) 
May 12 21:06:25 [W pybind_utils.h:798] Warning: Using sparse tensors in TorchScript is experimental. Many optimization pathways have not been thoroughly tested with sparse tensors. Please include the fact that the network is running sparse tensors in any bug reports submitted. (function operator()) 
May 12 21:06:25 /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1909: UserWarning: Argument order of nn.functional.embedding_bag was changed. Usage `embedding_bag(weight, input, ...)` is deprecated, and should now be `embedding_bag(input, weight, ...)`. 
May 12 21:06:25   warnings.warn("Argument order of nn.functional.embedding_bag was changed. " 
May 12 21:06:25 /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1909: UserWarning: Argument order of nn.functional.embedding_bag was changed. Usage `embedding_bag(weight, input, ...)` is deprecated, and should now be `embedding_bag(input, weight, ...)`. 
May 12 21:06:25   warnings.warn("Argument order of nn.functional.embedding_bag was changed. " 
May 12 21:06:25 /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1909: UserWarning: Argument order of nn.functional.embedding_bag was changed. Usage `embedding_bag(weight, input, ...)` is deprecated, and should now be `embedding_bag(input, weight, ...)`. 

ci.pytorch.org: 2 failed



@facebook-github-bot
Contributor

This pull request has been merged in 2c88141.

@facebook-github-bot facebook-github-bot deleted the gh/kimishpatel/26/head branch May 16, 2020 14:16