
[ONNX] Support aten::unflatten in torchscript exporter #99056

Closed
wants to merge 3 commits

Conversation

titaiwangms (Collaborator) commented Apr 13, 2023

pytorch-bot bot commented Apr 13, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/99056

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit fa9dae0:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot added the release notes: onnx label (torch.onnx related changes that should show up in the release notes) on Apr 13, 2023
titaiwangms added a commit that referenced this pull request Apr 13, 2023
ghstack-source-id: 645eb9a46987c3ed60b9fd3045b5d3fdf91bfca3
Pull Request resolved: #99056
@titaiwangms titaiwangms added module: onnx Related to torch.onnx topic: new features topic category labels Apr 13, 2023
titaiwangms changed the title from [ONNX] Support aten::unflatten in torchscript evaluator to [ONNX] Support aten::unflatten in torchscript exporter on Apr 13, 2023
titaiwangms added a commit that referenced this pull request Apr 13, 2023
ghstack-source-id: 3f90687b2307ecc8d769577735ac09923755026d
Pull Request resolved: #99056
titaiwangms added the ciflow/trunk label (Trigger trunk jobs on your pull request) on Apr 13, 2023
@@ -462,6 +463,50 @@ def reduce_dim(g, self, dim, keepdim, dtype):
return reduce


@_onnx_symbolic("aten::unflatten")
@_beartype.beartype
def unflatten(g: jit_utils.GraphContext, input, dim, unflattened_size):
Collaborator:

Could you comment on why this requires a minimum of opset 13?

Collaborator Author:

I tried to put this in opset 9, but it turns out that ops like Concat, Slice, etc. have moved attributes to inputs, which complicates things to the point that we would have to engage helper functions. Based on current demand, opset 13 should be enough. I will put a comment here.
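
For illustration (not part of this PR), a minimal export sketch showing aten::unflatten going through the exporter at the minimum opset; the module class, tensor sizes, and file name below are made up:

import torch

class UnflattenModel(torch.nn.Module):
    def forward(self, x):
        # Split dimension 1 (size 12) into (3, 4): (2, 12, 5) -> (2, 3, 4, 5).
        return x.unflatten(1, (3, 4))

# aten::unflatten is only supported from opset 13 onward; earlier opsets
# would need the helper-function rework mentioned above.
torch.onnx.export(UnflattenModel(), torch.randn(2, 12, 5), "unflatten.onnx", opset_version=13)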

BowenBao (Collaborator) left a review:

🚀

Collaborator (on the same unflatten symbolic):
@justinchuby @titaiwangms Should we mention something like "implementation ported from torchlib + link"?

Collaborator:
I’m fine either way. Any considerations?

BowenBao (Collaborator):
Just a random thought, mostly for awareness. But it could get outdated 🤷

Maybe wording it as "... originally ported ..." reduces the harm of it getting outdated, and surfaces that.

Collaborator Author:
A comment referencing a link to onnx Function?

Collaborator Author:
Added a comment with the link and function name to point out the source.

@@ -330,6 +332,7 @@ def reason_flaky() -> str:
fixme("ceil", dtypes=[torch.float64], reason=reason_onnx_runtime_does_not_support("Ceil", ["f64"])),
dont_care("sqrt", dtypes=BOOL_TYPES, reason=reason_onnx_does_not_support("Sqrt")),
dont_care("stft", opsets=[opsets_before(17)], reason=reason_onnx_does_not_support("STFT")),
dont_care("unflatten", opsets=[opsets_before(13)], reason=reason_onnx_does_not_support("Unflatten")),
Collaborator:

Is the reason accurate? I don't think there is an Unflatten op in ONNX.

Collaborator Author:

Updated to fixme. The op is marked unsupported there only because using the helper functions would require rewriting the structure; fixme should be sufficient as a starting point if someone needs the op in a legacy opset version.
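
For reference, a guess at how the updated test entry might read, reusing the fixme and opsets_before helpers already used in this file; the exact reason string is assumed rather than taken from the PR:

# Hypothetical replacement for the dont_care entry above; the reason wording may differ.
fixme("unflatten", opsets=[opsets_before(13)], reason="Helper functions are needed to support legacy opsets."),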

titaiwangms added a commit that referenced this pull request Apr 13, 2023
ghstack-source-id: 0bca1967cc0d035e8ae6d579305211c92923fd52
Pull Request resolved: #99056
titaiwangms (Collaborator Author):
@pytorchbot merge

pytorchmergebot (Collaborator):
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

Labels
ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
merging
module: onnx (Related to torch.onnx)
open source
release notes: onnx (torch.onnx related changes that should show up in the release notes)
topic: new features (topic category)

5 participants