[onnx.export] Avoid linear loop over symbol_dim_map #123029
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/123029
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 unrelated failures) As of commit 9d90fb6 with merge base 352a893. BROKEN TRUNK: the following jobs failed but were also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -89,33 +89,34 @@ namespace diagnostics = ::torch::onnx::diagnostics;
c10::ShapeSymbol ONNXDimToShapeSymbol(
    const onnx::TensorShapeProto_Dimension& dim,
-   SymbolDimMap& symbol_dim_map) {
+   SymbolDimMap& symbol_dim_map,
+   DimSymbolMap& dim_symbol_map) {
Helpful to annotate the param?
As in other PRs, I would include the rationale for this param as a comment.
Good thinking! I get confused myself if I haven't looked at it for a while. I've added a comment that explains why we need both ways.
In writing this, I actually think an argument could be made that `symbol_dim_map` could be removed and created once at the end, since `dim_symbol_map` is the look-up direction that seems to be happening actively during the export. Anyway, it would need a deeper look, so for now I'm not going to push on this.
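The alternative floated here — building the forward map once at the end instead of maintaining it throughout — can be sketched as follows. This is a hypothetical illustration, not the code the PR landed: the real `SymbolDimMap`/`DimSymbolMap` hold `c10::ShapeSymbol` and ONNX dimension protos, whereas simple stand-in types are assumed below.

```cpp
#include <map>
#include <string>

// Hypothetical stand-ins for the real ONNX dimension / ShapeSymbol types.
using Dim = std::string;
using ShapeSymbol = int;

using DimSymbolMap = std::map<Dim, ShapeSymbol>;  // actively queried during export
using SymbolDimMap = std::map<ShapeSymbol, Dim>;  // only needed at the end

// Build the symbol -> dim map once by inverting the reverse map,
// instead of keeping both maps in sync throughout the export.
SymbolDimMap BuildSymbolDimMap(const DimSymbolMap& dim_symbol_map) {
  SymbolDimMap symbol_dim_map;
  for (const auto& [dim, symbol] : dim_symbol_map) {
    symbol_dim_map.emplace(symbol, dim);
  }
  return symbol_dim_map;
}
```

As the comment notes, this would trade the bookkeeping of two parallel maps for a single O(n) inversion pass at the end of the export.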
@justinchuby Is this ready to merge? Thanks!
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
Force-pushed from 8d421d8 to 7f8e687.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Force-pushed from 7f8e687 to 9d90fb6.
Merge failed. Reason: New commits were pushed while merging. Please rerun the merge command.
@justinchuby I had to rebase this one to fix a merge conflict, so I'll need another merge from you. Thanks! 🙏
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This PR is part of an effort to speed up torch.onnx.export (pytorch#121422).
- Doing a reverse look-up in `symbol_dim_map` incurs a cost linear in the number of symbols. This happens for each node, so it incurs a quadratic cost over the whole export.
- Add a reverse look-up map `dim_symbol_map` that is kept in parallel with `symbol_dim_map`. This avoids the linear-time look-up that creates the quadratic export time complexity.
- This is a highly pragmatic solution. If someone more familiar with the code base has a better solution, I'm interested to hear about it.
- Resolves (9) in pytorch#121422. (partial fix of pytorch#121422)

Pull Request resolved: pytorch#123029
Approved by: https://github.com/justinchuby
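The technique the description outlines — keeping a reverse map in parallel so the per-node look-up stops being a linear scan — can be sketched like this. This is an illustrative reconstruction under assumed types, not the PR's actual code: the real maps pair `c10::ShapeSymbol` with ONNX dimension protos, and the symbol-allocation scheme here is invented for the sketch.

```cpp
#include <map>
#include <string>

using Dim = std::string;  // stand-in for onnx::TensorShapeProto_Dimension
using ShapeSymbol = int;  // stand-in for c10::ShapeSymbol

using SymbolDimMap = std::map<ShapeSymbol, Dim>;
using DimSymbolMap = std::map<Dim, ShapeSymbol>;

// Look up (or create) the symbol for a dimension. Without the reverse map,
// finding an existing symbol requires a linear scan over symbol_dim_map for
// every node; with dim_symbol_map the look-up is logarithmic (or O(1) with a
// hash map), removing the quadratic factor from the export.
ShapeSymbol DimToSymbol(
    const Dim& dim,
    SymbolDimMap& symbol_dim_map,
    DimSymbolMap& dim_symbol_map) {
  auto it = dim_symbol_map.find(dim);
  if (it != dim_symbol_map.end()) {
    return it->second;  // fast reverse look-up, no scan
  }
  // New dimension: insert into BOTH maps so they stay in parallel.
  ShapeSymbol symbol = static_cast<ShapeSymbol>(symbol_dim_map.size());
  symbol_dim_map.emplace(symbol, dim);
  dim_symbol_map.emplace(dim, symbol);
  return symbol;
}
```

The key invariant is that every insertion goes into both maps, so each map is always the exact inverse of the other.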