
[ONNX] Enable opset 13 ops #49612

Merged: 31 commits into pytorch:onnx_ms_1, Jan 6, 2021

Conversation

neginraoof (Contributor)

Duplicate #46903

facebook-github-bot (Contributor) commented Dec 18, 2020

💊 CI failures summary and remediations

As of commit 70aa5c4 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/2)

Step: "Run tests"

Jan 05 22:23:41 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Jan 05 22:23:41 processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 22:23:41 processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 22:23:41 processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 22:23:41 processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 22:23:41 processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 22:23:41 processing existing schema:  __init__(__torch__.torch.classes.dist_c10d.frontend _0) -> (None _0)
Jan 05 22:23:41 processing existing schema:  new_process_group_helper(__torch__.torch.classes.dist_c10d.frontend _0, int _1, int _2, int[] _3, str _4, __torch__.torch.classes.dist_c10d.Store _5, str? _6, int _7) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
Jan 05 22:23:41 processing existing schema:  get_process_group_by_name(__torch__.torch.classes.dist_c10d.frontend _0, str _1) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
Jan 05 22:23:41 processing existing schema:  get_name_of_process_group(__torch__.torch.classes.dist_c10d.frontend _0, __torch__.torch.classes.dist_c10d.ProcessGroup _1) -> (str _0)
Jan 05 22:23:41 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0)
Jan 05 22:23:41 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
Jan 05 22:23:41 
Jan 05 22:23:41 Broken ops: [
Jan 05 22:23:41 	aten::_test_ambiguous_defaults.a(Tensor dummy, int a=1, int b=1) -> (Tensor)
Jan 05 22:23:41 	aten::_test_ambiguous_defaults.b(Tensor dummy, int a=2, str b="2") -> (Tensor)
Jan 05 22:23:41 ]
Jan 05 22:23:41 + cleanup
Jan 05 22:23:41 + retcode=1
Jan 05 22:23:41 + set +x
Jan 05 22:23:41 =================== sccache compilation log ===================
Jan 05 22:23:41 =========== If your build fails, please take a look at the log above for possible reasons ===========

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (2/2)

Step: "Run tests"

Jan 05 22:54:21 what(): boxed_kernel_func_ == nullptr INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h":220, please report a bug to PyTorch. Tried to set a manually boxed kernel for a kernel that already has a boxed kernel set.
Jan 05 22:54:18 + '[' /tmp/pytorch_py_test.log '!=' '' ']'
Jan 05 22:54:18 + run_all_tests
Jan 05 22:54:18 + tee /tmp/pytorch_py_test.log
Jan 05 22:54:18 + run_dynamic python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 05 22:54:18 + echo 'Running in DynamicShape mode: python3' /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 05 22:54:18 Running in DynamicShape mode: python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 05 22:54:18 + XLA_EXPERIMENTAL=nonzero:masked_select
Jan 05 22:54:18 + run_test python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 05 22:54:18 + python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 05 22:54:21 terminate called after throwing an instance of 'c10::Error'
Jan 05 22:54:21   what():  boxed_kernel_func_ == nullptr INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h":220, please report a bug to PyTorch. Tried to set a manually boxed kernel for a kernel that already has a boxed kernel set.
Jan 05 22:54:21 Exception raised from setManuallyBoxedKernel_ at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:220 (most recent call first):
Jan 05 22:54:21 frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x7d (0x7f3edbf0a31d in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
Jan 05 22:54:21 frame #1: <unknown function> + 0xf41ee0 (0x7f3edd070ee0 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 05 22:54:21 frame #2: c10::impl::OperatorEntry::registerKernel(c10::Dispatcher const&, c10::optional<c10::DispatchKey>, c10::KernelFunction, c10::optional<c10::impl::CppSignature>, std::unique_ptr<c10::FunctionSchema, std::default_delete<c10::FunctionSchema> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x5eb (0x7f3edd06dddb in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 05 22:54:21 frame #3: c10::Dispatcher::registerImpl(c10::OperatorName, c10::optional<c10::DispatchKey>, c10::KernelFunction, c10::optional<c10::impl::CppSignature>, std::unique_ptr<c10::FunctionSchema, std::default_delete<c10::FunctionSchema> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x134 (0x7f3edd064f54 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 05 22:54:21 frame #4: torch::Library::_impl(char const*, torch::CppFunction&&) & + 0x439 (0x7f3edd09b739 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 05 22:54:21 frame #5: torch::Library& torch::Library::impl<char const*, at::Tensor (*)(at::Tensor const&, at::Tensor const&)>(char const*, at::Tensor (*&&)(at::Tensor const&, at::Tensor const&)) & + 0x64 (0x7f3ecec41e24 in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
Jan 05 22:54:21 frame #6: <unknown function> + 0x68e4cf (0x7f3ecec334cf in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
Jan 05 22:54:21 frame #7: torch::detail::TorchLibraryInit::TorchLibraryInit(torch::Library::Kind, void (*)(torch::Library&), char const*, c10::optional<c10::DispatchKey>, char const*, unsigned int) + 0xdb (0x7f3ecec4058b in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
Jan 05 22:54:21 frame #8: <unknown function> + 0x223571 (0x7f3ece7c8571 in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
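The assert in the trace guards an invariant in the C++ dispatcher: a manually boxed kernel may only be installed once per kernel, and the torch_xla build here attempted a second installation. As a rough illustration only (a Python model of the invariant, not the real C++ `KernelFunction` API):

```python
# Illustrative Python model of the invariant behind the INTERNAL ASSERT at
# KernelFunction_impl.h:220: boxed_kernel_func_ must still be null (None)
# when a manually boxed kernel is set.

class KernelFunction:
    def __init__(self):
        self.boxed_kernel_func = None  # mirrors boxed_kernel_func_ == nullptr

    def set_manually_boxed_kernel(self, fn):
        if self.boxed_kernel_func is not None:
            raise AssertionError(
                "Tried to set a manually boxed kernel for a kernel "
                "that already has a boxed kernel set."
            )
        self.boxed_kernel_func = fn

k = KernelFunction()
k.set_manually_boxed_kernel(lambda stack: stack)       # first registration: OK
try:
    k.set_manually_boxed_kernel(lambda stack: stack)   # second registration: asserts
except AssertionError as e:
    print(e)
```

In the CI failure, the second registration came from the XLA extension (`_XLAC...so`) re-registering a kernel via `torch::Library::impl` for an operator whose boxed kernel was already set by the base library.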

This comment was automatically generated by Dr. CI and has been revised 88 times.

@facebook-github-bot facebook-github-bot added oncall: distributed Add this issue/PR to distributed oncall triage queue fx labels Dec 18, 2020
@neginraoof neginraoof removed oncall: jit Add this issue/PR to JIT oncall triage queue open source labels Dec 18, 2020
BowenBao (Collaborator) left a comment

LGTM, thanks! (please do a rebase)

…eraoof/opset13

# Conflicts:
#	test/onnx/test_pytorch_onnx_onnxruntime.py
#	torch/onnx/symbolic_helper.py
#	torch/onnx/symbolic_opset13.py
@neginraoof neginraoof merged commit 616da7c into pytorch:onnx_ms_1 Jan 6, 2021
spandantiwari pushed a commit to spandantiwari/pytorch that referenced this pull request Jan 8, 2021
* Enable opset 13 ORT tests

* Update test.sh

* Set environ var

* Update test.sh

* Enabling more ops for opset 13

* change master to main

* Update symbolic_opset13.py

* Flake 8 fix

* [ONNX] Support onnx if/loop sequence output in opset 13 - (pytorch#49270)

* Symbolic function for torch.square (pytorch#49446)

* Clean up tests

* Exclude more tests

* Trigger build

* [ONNX] Support onnx if/loop sequence output in opset 13 - (pytorch#49270)

* Symbolic function for torch.square (pytorch#49446)

* update ORT version

* disable more tests

* clean up

* flake8

* Disable TV tests

* Update test_pytorch_onnx_onnxruntime.py

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Co-authored-by: David Fan <30608893+jiafatom@users.noreply.github.com>
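Several of the commits above add symbolic functions, which map a PyTorch op to equivalent ONNX ops during export. As a minimal sketch of the pattern used for torch.square (pytorch#49446): the function receives the exporter's graph-building object `g` and the input value, and emits `Mul(x, x)`. The `RecordingGraph` stub below is a hypothetical stand-in for the real `g` supplied by torch.onnx, so the shape of the pattern can be shown without a full export:

```python
# Sketch of an ONNX symbolic function: torch.square(x) lowers to Mul(x, x).
# RecordingGraph is a test double for the exporter's graph object `g`.

def square(g, self):
    """Symbolic: emit onnx::Mul(x, x) for torch.square(x)."""
    return g.op("Mul", self, self)

class RecordingGraph:
    """Records every emitted op instead of building a real ONNX graph."""
    def __init__(self):
        self.nodes = []

    def op(self, kind, *args):
        self.nodes.append((kind, args))
        return f"%{len(self.nodes)}"  # fake SSA value name for the output

g = RecordingGraph()
out = square(g, "%x")
print(g.nodes)  # [('Mul', ('%x', '%x'))]
```

In the real exporter, such functions live in `torch/onnx/symbolic_opset13.py` (and earlier opset files) and `g.op` builds actual ONNX graph nodes.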
facebook-github-bot pushed a commit that referenced this pull request Jan 13, 2021
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (#49270)
- Symbolic function for torch.square (#49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (#49783)
- [ONNX] Enable export of aten::__derive_index (#49514)
- [ONNX] Update symbolic for unfold (#49378)
- [ONNX] Update the sequence of initializers in exported graph so that it is the same as the inputs. (#49798)
- [ONNX] Enable opset 13 ops (#49612)
- [ONNX] Improve error message for supported model input types in ONNX export API. (#50119)
- [ONNX] Add a post-pass for If folding (#49410)

Pull Request resolved: #50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
7 participants