[ONNX] Opset 17 STFT support #83944
Conversation
[ghstack-poisoned]
❌ 1 new failure, 1 base failure as of commit f304c53 (more details on the Dr. CI page).
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
ghstack-source-id: be5336e569ba3577994bc0f323a7502cc7c6b2d0 Pull Request resolved: #83944
/easycla As part of the transition to the PyTorch Foundation, this project now requires contributions be covered under the new CLA. See #85559 for additional details. This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.
Hi @justinchuby, thanks for working on the STFT support. Is the PR ready for review?
@nateanl thanks for asking! The PR is not ready yet. I started it and got distracted. |
Thank you @justinchuby. Is there any news? It would be really great to have it in the next version of PyTorch. This is something we actually need for SpeechBrain, because we need the feature extraction pipeline to be ONNX-exportable.
@mravanelli A more realistic issue I realized is that ONNX doesn't have the appropriate ops to handle complex <-> real matrix conversion. Since `torch.stft` in PyTorch operates on complex numbers and the ONNX STFT operator works on real numbers, I will need to think harder about how this is possible. Assuming ONNX does support the operator, you can always implement and register a custom symbolic function (https://pytorch.org/docs/stable/onnx.html#adding-support-for-operators) and not have to worry about PyTorch release cycles; a rough sketch of that approach is below.
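For illustration, here is a minimal, hypothetical sketch of registering a custom symbolic function for `aten::stft`, as the linked docs describe. The function name `stft_symbolic`, the argument order (which follows the `aten::stft` schema and may differ across PyTorch versions), and the mapping onto the opset 17 `STFT` op are assumptions, not code from this PR; a complete converter would also need to handle optional arguments, output layout, and complex returns.

```python
# Hypothetical sketch: map aten::stft to the ONNX opset 17 STFT op.
# Assumes hop_length/win_length are concrete ints and a window tensor is given;
# output layout handling (transpose, complex vs. real) is intentionally omitted.
import torch
import torch.onnx
from torch.onnx import symbolic_helper


@symbolic_helper.parse_args("v", "i", "i", "i", "v", "b", "b", "b")
def stft_symbolic(g, input, n_fft, hop_length, win_length, window,
                  normalized, onesided, return_complex):
    # ONNX STFT expects a real signal shaped [batch, signal_length, 1],
    # so add a trailing channel dimension (Unsqueeze takes axes as an input
    # tensor from opset 13 onward).
    axes = g.op("Constant", value_t=torch.tensor([-1], dtype=torch.int64))
    signal = g.op("Unsqueeze", input, axes)
    # frame_step and frame_length are scalar int64 inputs to STFT.
    frame_step = g.op("Constant",
                      value_t=torch.tensor(hop_length, dtype=torch.int64))
    frame_length = g.op("Constant",
                        value_t=torch.tensor(win_length, dtype=torch.int64))
    # Returns the raw ONNX STFT output: [batch, frames, fft_bins, 2].
    return g.op("STFT", signal, frame_step, window, frame_length,
                onesided_i=1 if onesided else 0)


# Register before torch.onnx.export, for opset 17 and above.
torch.onnx.register_custom_op_symbolic("aten::stft", stft_symbolic, 17)
```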
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as `Stale`.
This PR addresses issue [#81075](#81075), making `torch.stft` compatible with ONNX Opset 17's STFT operator. The conversion works for _most_ of `torch.stft`'s functionality:

- Batched or unbatched inputs
- Normalization
- Pre-computed windows
- Rectangular windows
- One-sided returns
- Window centering (implicitly supported)

What is currently _not_ supported is **complex types**, due to the lack of conversion functionality between PyTorch and ONNX (#86746). Regardless, this is easy to bypass by setting `return_complex=False` when calling `torch.stft`, as in the sketch below. Note that there is already a draft PR to address this (#83944), but it is currently closed and only partially addresses the conversion (i.e., most of `torch.stft` functionality is missing, and there are no unit tests).

Pull Request resolved: #92087
Approved by: https://github.com/justinchuby
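As a usage illustration (not taken from the PR itself), the following is a minimal sketch of exporting a module that calls `torch.stft` with `return_complex=False` through opset 17. The module name `STFTModule`, the FFT parameters, and the input shape are illustrative assumptions.

```python
# Sketch: export torch.stft via the opset 17 STFT operator by avoiding
# complex outputs (return_complex=False yields a real tensor whose last
# dimension of size 2 holds the real and imaginary parts).
import torch


class STFTModule(torch.nn.Module):
    def __init__(self, n_fft=400, hop_length=160):
        super().__init__()
        self.n_fft = n_fft
        self.hop_length = hop_length
        # Pre-computed window, one of the supported cases listed above.
        self.register_buffer("window", torch.hann_window(n_fft))

    def forward(self, waveform):
        return torch.stft(
            waveform,
            n_fft=self.n_fft,
            hop_length=self.hop_length,
            window=self.window,
            return_complex=False,
        )


model = STFTModule()
dummy = torch.randn(1, 16000)  # batched mono audio, 1 second at 16 kHz
torch.onnx.export(model, dummy, "stft.onnx", opset_version=17)
```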
Stack from ghstack (oldest at bottom):
#81075
TODO: