
[ONNX] Add checks in ONNXSetDynamicInputShape #49783

Merged
merged 4 commits into pytorch:onnx_ms_1 on Jan 4, 2021

Conversation

jiafatom (Contributor) commented Dec 23, 2020

(1) This PR adds an exception check to ONNXSetDynamicInputShape (see the sketch below).
(2) It imports the torchvision CI tests into the PyTorch CI, so that test failures can be monitored while submitting PRs.

This PR was originally opened as #49366, which ran into merge issues, so it is duplicated here and now targets pytorch:onnx_ms_1.
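For context, dynamic input shapes are declared through the `dynamic_axes` argument of `torch.onnx.export`, which is what `ONNXSetDynamicInputShape` consumes. Below is a minimal sketch; the module, names, and the commented-out misuse are illustrative assumptions about the kind of input the new check is meant to reject, not the exact condition validated in the C++ pass.

```python
import io
import torch

# A tiny model and a 2-D dummy input (batch of 3, feature size 4).
model = torch.nn.Linear(4, 2)
dummy = torch.randn(3, 4)

# Correct usage: axis 0 of "input" and "output" is marked dynamic ("batch").
torch.onnx.export(
    model,
    dummy,
    io.BytesIO(),
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Hypothetical misuse: axis 5 does not exist on a 2-D input. The check added
# in this PR is intended to surface problems like this as an explicit
# exception instead of letting the pass fail later or silently.
# torch.onnx.export(
#     model, dummy, io.BytesIO(),
#     input_names=["input"],
#     dynamic_axes={"input": {5: "bogus_dim"}},
# )
```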

@facebook-github-bot added the cla signed and oncall: jit labels Dec 23, 2020
facebook-github-bot (Contributor) commented Dec 23, 2020

💊 CI failures summary and remediations

As of commit 776f888 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Dec 23 20:31:52 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Dec 23 20:31:52 processing existing schema:  gather(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  scatter(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor[] _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  reduce_scatter(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  alltoall_base(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor _2, int[] _3, int[] _4) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 23 20:31:52 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0)
Dec 23 20:31:52 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
Dec 23 20:31:52 
Dec 23 20:31:52 Broken ops: [
Dec 23 20:31:52 	aten::pixel_unshuffle(Tensor self, int downscale_factor) -> (Tensor)
Dec 23 20:31:52 	aten::xlogy_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!))
Dec 23 20:31:52 	aten::xlogy_.Scalar_Other(Tensor(a!) self, Scalar other) -> (Tensor(a!))
Dec 23 20:31:52 	aten::xlogy.Tensor(Tensor self, Tensor other) -> (Tensor)
Dec 23 20:31:52 	aten::xlogy.OutTensor(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!))
Dec 23 20:31:52 	aten::xlogy.Scalar_Self(Scalar self, Tensor other) -> (Tensor)
Dec 23 20:31:52 	aten::xlogy.OutScalar_Self(Scalar self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!))
Dec 23 20:31:52 	aten::xlogy.Scalar_Other(Tensor self, Scalar other) -> (Tensor)
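For reference, the flagged `aten::xlogy*` and `aten::pixel_unshuffle` schemas correspond to operators that were newly introduced around this time. The snippet below is a small usage sketch, assuming a PyTorch build recent enough (1.8+) to ship them.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([0.0, 1.0, 2.0])
y = torch.tensor([0.0, 3.0, 4.0])

# aten::xlogy computes x * log(y), defined as 0 wherever x == 0.
print(torch.xlogy(x, y))  # tensor([0.0000, 1.0986, 2.7726])

# aten::pixel_unshuffle rearranges (C, H*r, W*r) into (C*r*r, H, W).
img = torch.randn(1, 1, 4, 4)
print(F.pixel_unshuffle(img, downscale_factor=2).shape)  # torch.Size([1, 4, 2, 2])
```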

1 failure not recognized by patterns:

  • CircleCI job pytorch_linux_bionic_py3_8_gcc9_coverage_test1, step "Run tests"
1 job timed out:
  • pytorch_linux_bionic_py3_8_gcc9_coverage_test1

This comment was automatically generated by Dr. CI and has been revised 12 times.

@jiafatom jiafatom changed the base branch from master to onnx_ms_1 December 23, 2020 04:03
@BowenBao BowenBao merged commit ab060a1 into pytorch:onnx_ms_1 Jan 4, 2021
BowenBao pushed a commit that referenced this pull request Jan 4, 2021
* [ONNX] Add checks in ONNXSetDynamicInputShape
spandantiwari pushed a commit to spandantiwari/pytorch that referenced this pull request Jan 8, 2021
* [ONNX] Add checks in ONNXSetDynamicInputShape
facebook-github-bot pushed a commit that referenced this pull request Jan 13, 2021
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (#49270)
- Symbolic function for torch.square (#49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (#49783) …
- [ONNX] Enable export of aten::__derive_index (#49514) …
- [ONNX] Update symbolic for unfold (#49378) …
- [ONNX] Update the sequence of initializers in the exported graph so that it is the same as the inputs. (#49798)
- [ONNX] Enable opset 13 ops (#49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (#50119)
- [ONNX] Add a post-pass for If folding (#49410)

Pull Request resolved: #50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
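As a user-level illustration of the features listed in the commit message above (the `torch.square` symbolic and the opset 13 enablement), here is a rough sketch, assuming a PyTorch build that already contains this dev-branch merge.

```python
import io
import torch

class Square(torch.nn.Module):
    def forward(self, x):
        return torch.square(x)

# torch.square export relies on the symbolic added in #49446; opset 13 is
# usable thanks to the opset-13 enablement work (#49612). Axis 0 is marked
# dynamic, which also exercises ONNXSetDynamicInputShape.
torch.onnx.export(
    Square(),
    torch.randn(2, 3),
    io.BytesIO(),
    opset_version=13,
    input_names=["x"],
    dynamic_axes={"x": {0: "batch"}},
)
```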
Labels: cla signed, oncall: jit, open source