
[quant] Fix applying non-zero offset 1 to null pointer in quantized interpolation #65570

Closed
wants to merge 1 commit

Conversation

z-a-f (Contributor) commented Sep 23, 2021

Summary: Although this issue could not pop up in practice, LLVM-12 reports an error about it if left unchecked.

Test Plan: `buck test mode/dev //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_empty_batch (quantization.core.test_quantized_op.TestQuantizedOps)'`

Reviewed By: r-barnes

Differential Revision: D31151681

fbshipit-source-id: 2f890cf44b8918f1600c7ed7027004d867ed1153
facebook-github-bot commented Sep 23, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit f04376a (more details on the Dr. CI page):



1 failure not recognized by patterns:

| Job | Step | Action |
| --- | --- | --- |
| GitHub Actions linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 2, 2, linux.8xlarge.nvidia.gpu) | Unknown | 🔁 rerun |

❄️ 1 failure tentatively classified as flaky

but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun) ❄️

```
Sep 23 22:01:44 RuntimeError: tensorflow/compil...OK() (Unknown: Could not start gRPC server vs. OK)
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.10-py3.6-linux-x86_64.egg/torch_xla/distributed/xla_multiprocessing.py", line 314, in _setup_replication
Sep 23 22:01:44     device = xm.xla_device()
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.10-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 232, in xla_device
Sep 23 22:01:44     devkind=devkind if devkind is not None else None)
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.10-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 137, in get_xla_supported_devices
Sep 23 22:01:44     xla_devices = _DEVICES.value
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.10-py3.6-linux-x86_64.egg/torch_xla/utils/utils.py", line 32, in value
Sep 23 22:01:44     self._value = self._gen_fn()
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.10-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 19, in <lambda>
Sep 23 22:01:44     _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices())
Sep 23 22:01:44 RuntimeError: tensorflow/compiler/xla/xla_client/xrt_local_service.cc:56 : Check failed: tensorflow::NewServer(server_def, &server_) == ::tensorflow::Status::OK() (Unknown: Could not start gRPC server vs. OK)
Sep 23 22:01:44 Traceback (most recent call last):
Sep 23 22:01:44   File "/var/lib/jenkins/workspace/xla/test/test_mp_save.py", line 63, in <module>
Sep 23 22:01:44     xmp.spawn(_mp_fn, args=(temp_file,))
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.10-py3.6-linux-x86_64.egg/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
Sep 23 22:01:44     start_method=start_method)
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
Sep 23 22:01:44     while not context.join():
Sep 23 22:01:44   File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 144, in join
Sep 23 22:01:44     exit_code=exitcode
Sep 23 22:01:44 torch.multiprocessing.spawn.ProcessExitedException: process 2 terminated with exit code 17
```

This comment was automatically generated by Dr. CI.

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D31151681
