
Fix Squeeze shape inference #3516

Merged — 3 commits merged into onnx:master, Jun 15, 2021

Conversation

matteosal (Contributor)

@matteosal matteosal commented Jun 4, 2021

Fixes #3515

There seem to be two problems here:

  1. Shape inference should not give up as soon as a symbolic dimension is found, but only when the axes spec acts on that dimension, i.e. when axes is absent or explicitly contains that dimension.
  2. When shape inference does give up, it should not predict a rank of 0.

I have fixed (1), but (2) still remains to be addressed.

Signed-off-by: Matteo Salvarezza <matteo.salvarezza@gmail.com>
@matteosal matteosal requested a review from a team as a code owner June 4, 2021 17:19
@gramalingam gramalingam added operator Issues related to ONNX operators shape inference Issues related to shape inference labels Jun 8, 2021
@gramalingam (Contributor)

Can't we move line 1634 to after the first loop to fix (2)?

```diff
 for (int i = 0; i < input_ndim; ++i) {
-  if (!input_shape.dim(i).has_dim_value()) {
+  if (!input_shape.dim(i).has_dim_value() &&
+      (axes_not_specified || std::find(axes.begin(), axes.end(), i) != axes.end())) {
     return;
```
Contributor

In principle, I think we can omit the `|| std::find(axes.begin(), axes.end(), i) != axes.end()` part. Thus, we assume that the axes specification is correct for shape-inference purposes. If it is incorrect, the runtime execution needs to catch and report it anyway. Does that make sense? However, this would require minor changes to the loop below at line 1657: we should check `has_dim_value()` before accessing the dimension value.

Contributor Author

And what should happen in the loop below if `has_dim_value` is false? I still see no option other than giving up on shape inference.
I understand your point about checking that the symbolic size is actually 1 at runtime; that should happen anyway. But shape inference should still give up (i.e. refrain from specifying anything about the output shape) in this case.

Contributor

This is a fine point of distinction. In the loop below, if the outer condition is true (that is, axes is specified and contains i), then we should omit dim(i) from the output. The idea is that the shape-inference contract is that it indicates what the output shape will be IF the op executes successfully and produces an output. Type and shape inference does NOT guarantee that the op will execute successfully (because of the need for some runtime checks). Does that make sense?

Contributor

If you are not convinced, we can merge this as is, and we can do this as a separate PR later after discussion.

Contributor Author

> The idea is that the shape-inference contract is that it indicates what the output shape will be IF the op executes successfully and produces an output. Type and shape inference does NOT guarantee that the op will execute successfully (because of the need for some runtime checks). Does that make sense?

OK, so you are suggesting to omit `dim(i)` from the output if:

  • `dim(i)` is numerical and equal to 1 (the usual case), OR
  • `dim(i)` is symbolic

In the second case, runtimes should obviously check that `dim(i)` is actually 1 when the Squeeze op is evaluated. Is that correct?

Contributor Author

I have added the change, please let me know if it's ok

Signed-off-by: Matteo Salvarezza <matteo.salvarezza@gmail.com>
@matteosal (Contributor Author)

> Can't we move line 1634 to after the first loop to fix (2)?

Yes, that worked

@matteosal (Contributor Author)

@gramalingam any news on this?

Signed-off-by: Matteo Salvarezza <matteo.salvarezza@gmail.com>
@gramalingam gramalingam merged commit 7a2bd35 into onnx:master Jun 15, 2021
hwangdeyu pushed a commit to hwangdeyu/onnx that referenced this pull request Jun 17, 2021
* Return early only if axes spec wants to act on symbolic dim

Signed-off-by: Matteo Salvarezza <matteo.salvarezza@gmail.com>

* Don't predict a rank of 0 when shape can't be inferred

Signed-off-by: Matteo Salvarezza <matteo.salvarezza@gmail.com>

* Remove symbolic dims in the axes spec, don't give up inference

Signed-off-by: Matteo Salvarezza <matteo.salvarezza@gmail.com>
Signed-off-by: hwangdeyu <dejack953@outlook.com>
@matteosal matteosal deleted the bugfix/SqueezeShapeInference branch April 26, 2023 12:51
Successfully merging this pull request may close these issues.

Squeeze shape inference failure with symbolic dimension