fix dict iteration error in fuse_known_modules #69159

Closed

Conversation

bfineran

Fixes #69158
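
For context, a minimal sketch of the failure class the title refers to (an illustration only, not the actual fuse_known_modules code): in Python 3, adding or removing dict entries while iterating over the live dict view raises a RuntimeError, and the usual fix is to iterate over a snapshot of the keys.

# Illustration only -- not the actual fuse_known_modules code.
modules = {"conv": "Conv2d", "bn": "BatchNorm2d"}

# Bug pattern: mutating the dict while iterating over its live view.
try:
    for name in modules:
        modules[name + "_fused"] = modules[name]
except RuntimeError as e:
    print(e)  # "dictionary changed size during iteration"

# Fix pattern: snapshot the keys first, then mutate freely.
modules = {"conv": "Conv2d", "bn": "BatchNorm2d"}
for name in list(modules):
    modules[name + "_fused"] = modules[name]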

@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/bfineran/pytorch/blob/f94c0d6d442527564ebfc71cea98ed67ae8fe2ee/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflow / Labels (bold = enabled) / Status
Triggered Workflows
linux-bionic-cuda11.5-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
libtorch-linux-bionic-cuda11.5-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

Hi @bfineran!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot
Contributor

facebook-github-bot commented Nov 30, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit f94c0d6 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-cuda11.5-py3.6-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (1/1)

Step: "Test"

2021-11-30T23:51:16.1426706Z FAIL [5.216s]: test_fs_pool (__main__.TestMultiprocessing)
2021-11-30T23:51:13.1484466Z   test_non_leaf_variable_sharing (__main__.TestMultiprocessing) ... ok (0.003s)
2021-11-30T23:51:13.1695928Z   test_parameter_sharing (__main__.TestMultiprocessing) ... /opt/conda/lib/python3.6/site-packages/torch/utils/hooks.py:62: UserWarning: backward hook <function TestMultiprocessing._test_autograd_sharing.<locals>.hook at 0x7f4407efe950> on tensor will not be serialized.  If this is expected, you can decorate the function with @torch.utils.hooks.unserializable_hook to suppress this warning
2021-11-30T23:51:13.1698117Z   "to suppress this warning".format(repr(hook)))
2021-11-30T23:51:13.1915551Z ok (0.043s)
2021-11-30T23:51:13.2104955Z   test_variable_sharing (__main__.TestMultiprocessing) ... /opt/conda/lib/python3.6/site-packages/torch/utils/hooks.py:62: UserWarning: backward hook <function TestMultiprocessing._test_autograd_sharing.<locals>.hook at 0x7f4407efe9d8> on tensor will not be serialized.  If this is expected, you can decorate the function with @torch.utils.hooks.unserializable_hook to suppress this warning
2021-11-30T23:51:13.2107345Z   "to suppress this warning".format(repr(hook)))
2021-11-30T23:51:13.2702407Z ok (0.078s)
2021-11-30T23:51:16.1421453Z   test_wrong_cuda_fork (__main__.TestMultiprocessing) ... ok (2.872s)
2021-11-30T23:51:16.1422348Z 
2021-11-30T23:51:16.1422948Z ======================================================================
2021-11-30T23:51:16.1426706Z FAIL [5.216s]: test_fs_pool (__main__.TestMultiprocessing)
2021-11-30T23:51:16.1428899Z ----------------------------------------------------------------------
2021-11-30T23:51:16.1429905Z Traceback (most recent call last):
2021-11-30T23:51:16.1430575Z   File "test_multiprocessing.py", line 355, in test_fs_pool
2021-11-30T23:51:16.1432140Z     self._test_pool(repeat=TEST_REPEATS)
2021-11-30T23:51:16.1433114Z   File "test_multiprocessing.py", line 327, in _test_pool
2021-11-30T23:51:16.1434282Z     do_test()
2021-11-30T23:51:16.1434857Z   File "test_multiprocessing.py", line 206, in __exit__
2021-11-30T23:51:16.1436246Z     self.test_case.assertFalse(self.has_shm_files())
2021-11-30T23:51:16.1437136Z AssertionError: True is not false
2021-11-30T23:51:16.1437514Z 

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@samdow samdow requested a review from z-a-f November 30, 2021 22:00
@samdow samdow added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Nov 30, 2021
@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!

@bfineran
Author

bfineran commented Jan 6, 2022

Hi @z-a-f, any update on this?

@jerryzh168
Contributor

Can you add a test after https://github.com/pytorch/pytorch/blob/master/test/quantization/eager/test_quantize_eager_ptq.py#L520 with the example in #69158 to show that this PR fixes your issue?
You can run the test with python test/test_quantization.py TestClass.test_case
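
A hypothetical shape for such a test (a sketch only: the module layout, the test name, and the assumption that a conv-bn-relu fusion exercises the bug are illustrative; the exact repro lives in #69158):

import torch
import torch.nn as nn
from torch.quantization import fuse_modules

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 1)
        self.bn = nn.BatchNorm2d(3)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

def test_fuse_known_modules_dict_iteration():
    m = M().eval()
    # The check is simply that fusion completes without raising
    # "RuntimeError: dictionary changed size during iteration".
    fused = fuse_modules(m, [["conv", "bn", "relu"]])
    fused(torch.randn(1, 3, 4, 4))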

@jerryzh168 jerryzh168 self-requested a review January 7, 2022 22:38
@jerryzh168
Contributor

Could you rebase on master?

@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label May 21, 2022
@github-actions github-actions bot closed this Jun 21, 2022
@z-a-f
Contributor

z-a-f commented Oct 3, 2022

@bfineran This PR was not landed. Is there a plan to land it?
