
[ONNX] Add GreaterOrEqual and LessOrEqual to opset 12 ONNX export #38311

Closed
wants to merge 4 commits

Conversation

@yaeldekel (Contributor) commented May 12, 2020

GreaterOrEqual and LessOrEqual were added in ONNX opset 12. This PR adds support for exporting these operators directly, instead of composing them from "Not" and "Less" / "Greater".

@dr-ci (bot) commented May 12, 2020

💊 CI failures summary and remediations

As of commit 6f0e642 (more details on the Dr. CI page):



🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_test2 (1/1)

Step: "Test"

AssertionError: Not within tolerance rtol=1.3e-06 atol=1e-05 at input[0] (33.0 vs. 11.0) and 1 other locations (100.00%)
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\_internal\common_utils.py", line 979, in assertEqual 
    assertTensorsEqual(x, y) 
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\_internal\common_utils.py", line 937, in assertTensorsEqual 
    atol=atol, rtol=rtol, message=message) 
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\_internal\common_utils.py", line 979, in assertEqual 
    assertTensorsEqual(x, y) 
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\_internal\common_utils.py", line 941, in assertTensorsEqual 
    torch.testing.assert_allclose(a, b, atol=atol, rtol=rtol, equal_nan=True, msg=message) 
  File "C:\Users\circleci\project\build\win_tmp\build\torch\testing\__init__.py", line 60, in assert_allclose 
    raise AssertionError(msg) 
AssertionError: Not within tolerance rtol=1.3e-06 atol=1e-05 at input[0] (33.0 vs. 11.0) and 1 other locations (100.00%) 
 
---------------------------------------------------------------------- 
Ran 5180 tests in 367.685s 
 
FAILED (failures=2, skipped=209) 
 
Generating XML reports... 
Generated XML report: test-reports\python-unittest\TEST-TestDevicePrecisionCUDA-20200514135948.xml 
Generated XML report: test-reports\python-unittest\TEST-TestTensorDeviceOpsCPU-20200514135948.xml 
Generated XML report: test-reports\python-unittest\TEST-TestTensorDeviceOpsCUDA-20200514135948.xml 

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@@ -1956,42 +1956,70 @@ def forward(self, input):
x = torch.ones(5, 6)
self.run_test(DimArange(), x)

def _test_compare_ops(self, model, num_inputs):
@neginraoof (Collaborator) commented May 13, 2020

What is the inferred output data type in PyTorch if the inputs don't have the same data type?

@yaeldekel (Contributor, author) commented May 14, 2020

The output type is always bool, regardless of the input types.
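A quick check of that behavior (the tensor values here are chosen arbitrarily for illustration):

```python
import torch

a = torch.tensor([1, 2, 3])        # int64
b = torch.tensor([2.0, 2.0, 2.0])  # float32
# Comparison results are always bool, regardless of operand dtypes.
print((a >= b).dtype)  # torch.bool
print((a <= b).dtype)  # torch.bool
```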



@@ -1956,42 +1956,70 @@ def forward(self, input):
x = torch.ones(5, 6)
self.run_test(DimArange(), x)

def _test_compare_ops(self, model, num_inputs):
@neginraoof (Collaborator) commented May 13, 2020

Can you add a couple more tests for combinations of input types (like float and int)?
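A sketch of what such a mixed-dtype case might look like as a standalone check (the GeModel class and input shapes are invented here, and the PR's self.run_test ONNX round-trip helper is replaced by plain assertions on eager-mode behavior):

```python
import torch

# Hypothetical model comparing a float tensor against an int tensor.
class GeModel(torch.nn.Module):
    def forward(self, x, y):
        return x >= y

x = torch.randn(2, 3)            # float32
y = torch.randint(0, 2, (2, 3))  # int64
# Operands are promoted for the comparison; the result is still bool.
out = GeModel()(x, y)
```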


@neginraoof (Collaborator) left a review comment:

Thanks!

@neginraoof (Collaborator) commented May 14, 2020

cc @houseroad for review. Thanks.

@houseroad (Member) left a review comment:

Thanks

@facebook-github-bot (Contributor) left a comment:

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented May 18, 2020

@houseroad merged this pull request in ece878e.

7 participants