The Reshape documentation is unclear about whether reshaping from shape [0,10] to new shape [0,1,-1] is legal when the attribute 'allowzero' is NOT set.
The relevant sentence is:
At most one dimension of the new shape can be -1. In this case, the value is inferred from the size of the tensor and the remaining dimensions.
In the example, the input volume is 0, and the first two dimensions of the output tensor are [0,1]. The output tensor therefore has a volume of 0 regardless of the value inferred for the -1 wildcard, so under one reading of the documentation the example is illegal.
However, another interpretation would be that when an input dimension is forwarded (because the new shape specifies 0 and allowzero is not set), then the dimension is ignored for purposes of inferring the -1 wildcard. I.e., the inference question is treated as equivalent to inferring the -1 wildcard for reshaping [10] to [1,-1].
Which interpretation is intended?
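To make the two readings concrete, here is a small illustrative sketch (not the ONNX implementation; the function names are made up for this example). The strict reading takes the product over all output dimensions, forwarded ones included, so a forwarded 0 makes the -1 inference fail; the lenient reading drops forwarded positions from both sides before inferring.

```python
def resolve_shape_strict(input_shape, new_shape):
    """Strict reading: -1 is inferred from the full input volume divided
    by the product of all other output dims, forwarded 0s included.
    Fails when that product is 0."""
    # A new-shape entry of 0 forwards the input dimension at that index.
    out = [input_shape[i] if d == 0 else d for i, d in enumerate(new_shape)]
    if -1 in out:
        idx = out.index(-1)
        known = 1
        for i, d in enumerate(out):
            if i != idx:
                known *= d
        volume = 1
        for d in input_shape:
            volume *= d
        if known == 0 or volume % known != 0:
            raise ValueError("cannot infer -1 dimension")
        out[idx] = volume // known
    return out

def resolve_shape_lenient(input_shape, new_shape):
    """Lenient reading: forwarded positions are ignored on both sides
    when inferring -1, as if reshaping the remaining dims only."""
    fwd = {i for i, d in enumerate(new_shape) if d == 0}
    out = [input_shape[i] if i in fwd else d for i, d in enumerate(new_shape)]
    if -1 in out:
        idx = out.index(-1)
        vol = 1
        for i, d in enumerate(input_shape):
            if i not in fwd:
                vol *= d
        known = 1
        for i, d in enumerate(out):
            if i != idx and i not in fwd:
                known *= d
        if known == 0 or vol % known != 0:
            raise ValueError("cannot infer -1 dimension")
        out[idx] = vol // known
    return out

# Both readings agree when no forwarded dim is 0:
print(resolve_shape_strict([2, 10], [0, 1, -1]))   # -> [2, 1, 10]
# They diverge on the example from the question:
print(resolve_shape_lenient([0, 10], [0, 1, -1]))  # -> [0, 1, 10]
# resolve_shape_strict([0, 10], [0, 1, -1]) raises ValueError.
```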
Good question. Looking at the onnx shape inference implementation, it computes the complete products (not skipping the forwarded dimensions). Hence, it will not be able to infer an output dimension and will flag this as an error here.
However, the more general interpretation could be useful in some situations, I guess ... it may be worth investigating whether backend implementations support it.
Do you see any examples/models where this will be useful? If so, it may be worth updating the spec to allow it.
It seems to be useful for dealing with batch dimensions that might be zero. For example, reshaping from [n,a,b] to [0,-1] where n is the batch dimension.
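Under the lenient reading, that batch case would work even when the batch is empty. A hypothetical sketch (not the ONNX implementation) of inferring the wildcard with the forwarded batch dim excluded from both products:

```python
def infer_wildcard(input_shape, new_shape):
    # Illustrative only: 0 in new_shape forwards the input dim at that
    # index, and forwarded positions are skipped on both sides when
    # resolving -1 (the "lenient" interpretation).
    fwd = {i for i, d in enumerate(new_shape) if d == 0}
    out = [input_shape[i] if i in fwd else d for i, d in enumerate(new_shape)]
    idx = out.index(-1)
    vol = 1
    for i, d in enumerate(input_shape):
        if i not in fwd:
            vol *= d
    known = 1
    for i, d in enumerate(out):
        if i != idx and i not in fwd:
            known *= d
    out[idx] = vol // known
    return out

# Reshaping [n, a, b] -> [0, -1] resolves -1 to a*b whether or not n is 0:
print(infer_wildcard([0, 3, 4], [0, -1]))  # -> [0, 12]
print(infer_wildcard([5, 3, 4], [0, -1]))  # -> [5, 12]
```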
We (the Nvidia TensorRT group) ran into the issue with fasterrcnn_resnet50_fpn.onnx (I think it's derived from here) and accidentally fed it random data. I'm guessing there's some kind of internal batch dimension there, with a data-dependent length.
On the other hand, the "forwarding 0" is dangerous with networks that contain empty tensors, so there's much to be said for just discouraging "forwarding 0", even if it helps the use of wildcard -1. In retrospect, "forwarding -2" would have been a much better design, but Caffe chose 0.