Allow defining customized PythonOp shape inferer#17093
Merged
Conversation
8081e63 to
af446cf
ajindal1
reviewed
Aug 10, 2023
ajindal1
previously approved these changes
Aug 10, 2023
Contributor
ajindal1
left a comment
Just added 1 comment, otherwise LGTM.
…pengwa/python_op_shape_infer
ajindal1
approved these changes
Aug 11, 2023
…pengwa/python_op_shape_infer
…pengwa/python_op_shape_infer
kleiti
pushed a commit
to kleiti/onnxruntime
that referenced
this pull request
Mar 22, 2024
### Allow defining customized PythonOp shape inferer

For `torch.autograd.Function`, we convert it to a PythonOp in MSDomain. There are two places where shape inference is done for it: 1. in SymbolicShapeInfer; 2. in the PythonOp op definition. For a generic PythonOp, since we don't know the relationship between inputs and outputs, we can only infer the rank from the output ranks and generate a fresh symbolic dimension for each dim. This introduces many meaningless symbolic dimensions, which sometimes blocks our graph transformers from doing op fusion.

This PR provides a way to define custom shape inference for a `torch.autograd.Function` we define, so that the original dimensions are propagated across the PythonOp on a best-effort basis. The 2nd place is not covered yet; we could refine that later. Fixing the 1st one is enough for ORTModule training/evaluation.

### Motivation and Context
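To illustrate the difference the PR describes, here is a minimal sketch. The function names (`default_infer`, `custom_infer_shape`) and signatures below are illustrative assumptions, not the exact API this PR adds; they only contrast rank-only inference (fresh symbolic dims per output) with a user-defined inferer that propagates the input dims.

```python
# Illustrative sketch only: names and signatures here are assumptions,
# not the exact onnxruntime API introduced by this PR.
from typing import List, Optional, Union

# A shape is a list of dims; each dim is a concrete int or a symbolic name.
ShapeType = Optional[List[Union[int, str]]]

def default_infer(output_ranks: List[int]) -> List[ShapeType]:
    """Generic fallback: only output ranks are known, so every output dim
    becomes a fresh, meaningless symbolic dimension."""
    return [[f"unk__{i}_{d}" for d in range(rank)]
            for i, rank in enumerate(output_ranks)]

def custom_infer_shape(tensor_input_shapes: List[ShapeType]) -> List[ShapeType]:
    """Custom inferer for an element-wise Function: the output shape equals
    the input shape, so the original symbolic/concrete dims propagate."""
    return [tensor_input_shapes[0]]

print(default_infer([3]))                         # three fresh symbolic dims
print(custom_infer_shape([["batch", 128, 768]]))  # original dims preserved
```

With the fallback, downstream graph transformers see unrelated symbolic dims and may skip fusions; with the custom inferer, `"batch"`, `128`, and `768` flow through the PythonOp unchanged.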