
[ONNX] Support clamp_min and clamp_max #37872

Closed
wants to merge 1 commit

Conversation

@BowenBao (Collaborator) commented May 5, 2020

clamp_min is used in torch.nn.functional.normalize. Update symbolic_opset11 to support it using the updated Clip operator in ONNX opset 11.
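For context, a minimal pure-Python sketch of the semantics involved (not the actual PyTorch/ONNX code): clamp_min(x, lo) is an element-wise max(x, lo), which opset 11's Clip can express directly because min/max became optional tensor inputs rather than float attributes. The normalize helper below illustrates why torch.nn.functional.normalize needs clamp_min (to guard the division against a zero norm); the function names mirror the ops being discussed but are illustrative only.

```python
def clamp_min(values, lo):
    """Element-wise max(v, lo): the semantics of torch.clamp_min,
    expressible as ONNX opset 11 Clip(x, min=lo)."""
    return [max(v, lo) for v in values]

def clamp_max(values, hi):
    """Element-wise min(v, hi): torch.clamp_max, i.e. Clip(x, max=hi)."""
    return [min(v, hi) for v in values]

def normalize(values, eps=1e-12):
    """L2-normalize a vector; clamp_min on the norm prevents division by zero,
    mirroring how torch.nn.functional.normalize uses clamp_min."""
    norm = sum(v * v for v in values) ** 0.5
    return [v / max(norm, eps) for v in values]
```

Example: `clamp_min([-2.0, 0.0, 3.0], 0.0)` yields `[0.0, 0.0, 3.0]`, and `normalize([3.0, 4.0])` yields `[0.6, 0.8]`.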

@@ -1515,6 +1515,15 @@ def forward(self, x, k):
k = torch.tensor(3)
self.run_test(MyModuleDynamic(), [x, k])

@skipIfUnsupportedOpsetVersion([7, 12])
@neginraoof (Collaborator) commented May 6, 2020

Why is opset 12 skipped?


@neginraoof (Collaborator) commented May 6, 2020

I guess we can enable this after ORT version is updated?


@houseroad (Member) left a comment

Thanks


@facebook-github-bot (Contributor) left a comment

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot (Contributor) commented May 7, 2020

@houseroad merged this pull request in 7be9796.



6 participants