Add unique op #1547
base: main
Conversation
Codecov Report — Attention: Patch coverage is …

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main    #1547      +/-   ##
==========================================
- Coverage   72.28%   72.25%   -0.03%
==========================================
  Files         217      217
  Lines       29097    29138      +41
  Branches     3455     3462       +7
==========================================
+ Hits        21034    21055      +21
- Misses       6935     6953      +18
- Partials     1128     1130       +2
```

☔ View full report in Codecov by Sentry.
Thanks for your contribution! Could you follow the CLA bot's instructions to get that cleared?
```python
except Exception as e:
    # try to provide a more informative error message
    if _NOT_IMPLEMENTED_UNIQUE.search(str(e)) is not None:
        raise NotImplementedError(
            f"'onnxruntime' does not yet support Unique(11) operator with dtype={self.dtype}"
        ) from e
```
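For illustration, the error-translation pattern quoted above can be sketched in isolation as follows. Note that the regex here is a hypothetical stand-in — the real module-level `_NOT_IMPLEMENTED_UNIQUE` pattern is not shown in this excerpt, and `translate_ort_error` is an invented helper name, not part of the PR:

```python
import re

# Hypothetical stand-in for the module-level pattern referenced in the diff;
# the actual _NOT_IMPLEMENTED_UNIQUE regex is not shown in this excerpt.
_NOT_IMPLEMENTED_UNIQUE = re.compile(r"NOT_IMPLEMENTED.*Unique", re.IGNORECASE)

def translate_ort_error(e: Exception, dtype: str) -> Exception:
    """Map an opaque runtime error onto a clearer NotImplementedError."""
    if _NOT_IMPLEMENTED_UNIQUE.search(str(e)) is not None:
        return NotImplementedError(
            f"'onnxruntime' does not yet support Unique(11) operator with dtype={dtype}"
        )
    return e

err = RuntimeError("NOT_IMPLEMENTED: Could not find an implementation for Unique(11)")
print(type(translate_ort_error(err, "float16")).__name__)  # NotImplementedError
```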
I would remove this try-catch, as the function here is symbolic; we don't expect it to raise any errors.
Addressed in b528a6a
Yea, I may have jumped the gun a bit. Working on officially getting permission from my employer.
@microsoft-github-policy-service agree company="Radiance Technologies"
Force-pushed from 453783f to b528a6a.
Thanks for completing the CLA. I will take a look next week.
Force-pushed from 56c06cf to 7e6d906.
Follow-up to #113118 and #124306. Developed in coordination with the solution to microsoft/onnxscript#1547.

This PR adds the missing fake tensor implementation for `aten.unique_dim`, thus enabling tracing and compilation of `torch.unique` when `dim` is not None.

Local testing has proceeded with the following simple script (provided that one has checked out the changes in microsoft/onnxscript#1547; note `import torch` was missing from the original listing):

```python
import logging

import numpy as np
import onnx
import onnxruntime as ort
import torch

onnx_program = torch.onnx.dynamo_export(
    lambda x: torch.unique(x, dim=0, return_inverse=True),
    torch.arange(10),
    export_options=torch.onnx.ExportOptions(
        dynamic_shapes=True,
        diagnostic_options=torch.onnx.DiagnosticOptions(
            verbosity_level=logging.DEBUG)))
onnx_program.save("torch_unique.onnx")
onnx_inputs = onnx_program.adapt_torch_inputs_to_onnx(torch.arange(10))
onnx_outputs = onnx_program(*onnx_inputs)
loaded_onnx_program = onnx.load("torch_unique.onnx")
onnx.checker.check_model(loaded_onnx_program)
ort_session = ort.InferenceSession("torch_unique.onnx")
inputs = np.random.randint(0, 10, 10)
print(f"Inputs: {inputs}")
outputs = ort_session.run(None, {"l_x_": inputs})
print(f"Outputs: {outputs}")
print("Success")
```

Co-authored-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: #126561
Approved by: https://github.com/ezyang
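For reference, the ONNX Unique operator that this PR targets produces up to four outputs: the unique values, first-occurrence indices, inverse indices, and per-value counts. A minimal numpy emulation of the sorted, axis-free case can sketch those semantics; the example data here is illustrative, not drawn from the PR's tests:

```python
import numpy as np

# Emulate ONNX Unique (sorted=1, no axis attribute) with numpy. ONNX returns
# four outputs: unique values Y, first-occurrence indices, inverse indices
# mapping each input element to its slot in Y, and per-value counts.
x = np.array([2, 1, 1, 3, 4, 3], dtype=np.int64)
y, indices, inverse, counts = np.unique(
    x, return_index=True, return_inverse=True, return_counts=True
)
print(y)        # [1 2 3 4]
print(indices)  # [1 0 3 4]
print(inverse)  # [1 0 0 2 3 2]
print(counts)   # [2 1 2 1]
```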
Circling back around to this, @justinchuby. At the time, I had been waiting for you to resolve the bug that required a hacky workaround, but I realize that might not have been clear. There were also a couple of other potential unresolved bugs outside the scope of this PR (e.g., this comment). How would you like to proceed?
Sorry for the lack of clarity. I would suggest that you remove all the hacks so that the code is in its desirable state. If tests fail because of that, that's OK. I will then go ahead and fix what's needed (after I'm back from vacation).
Sounds good. FYI, the hacks were removed in b8b4cb1. As a reminder, the unit tests within … If I get a chance, I'll try to rebase this PR and resolve conflicts first.
I am implementing a CTC decoder class in PyTorch. I'm not able to export this class to ONNX. Here is my error: … How can I resolve this?
@a-gardner1 sorry for the delay. I think this PR is now good to merge (started the merge).
Add support for exporting `torch.unique`, following the conclusion of pytorch/pytorch#113118.
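Since `aten.unique_dim` is the operator that backs `torch.unique(x, dim=...)`, its semantics can be sketched with numpy's axis-aware `np.unique`: whole slices along the given dimension are compared, not individual elements. This is an illustrative sketch of the behavior the export targets, not code from the PR:

```python
import numpy as np

# torch.unique(x, dim=0, return_inverse=True) compares whole rows;
# numpy's axis-aware unique exhibits the same slice-wise semantics.
x = np.array([[1, 2], [3, 4], [1, 2]])
rows, inverse = np.unique(x, axis=0, return_inverse=True)
print(rows)                           # [[1 2] [3 4]]
print(np.asarray(inverse).ravel())    # [0 1 0]
```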