Get tensor value: xxx must be Const #746

Closed
Exlsunshine opened this issue Nov 21, 2019 · 5 comments

@Exlsunshine

Describe the bug

I have noticed that `xxx must be Const` is a known issue, but I also found a PR saying ONNX now supports dynamic padding. However, I still get the following error:

OP=Pad
Name=some/path/to/op/Pad
Inputs:
        some/path/to/op/concat:0=Concat, [-1, -1], 1
        some/path/to/op/Pad/paddings_Concat__113:0=Concat, [2, 2], 6
Outputs:
        some/path/to/op/Pad:0=[-1, -1], 1
Traceback (most recent call last):
  File "D:\Python\Python36\lib\site-packages\tf2onnx\tfonnx.py", line 352, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "D:\Python\Python36\lib\site-packages\tf2onnx\onnx_opset\nn.py", line 401, in version_1
    paddings = np.array(node.inputs[1].get_tensor_value()).transpose().flatten()
  File "D:\Python\Python36\lib\site-packages\tf2onnx\graph.py", line 256, in get_tensor_value
    raise ValueError("get tensor value: {} must be Const".format(self.name))
ValueError: get tensor value: some/path/to/op/Pad/paddings_Concat__113 must be Const
2019-11-21 15:52:39,788 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s)
2019-11-21 15:52:40,098 - ERROR - tf2onnx.tfonnx: Failed to convert node some/path/to/op/Fill

Does tf2onnx plan to support dynamic padding?
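For context, starting with opset 11 the ONNX Pad operator takes `pads` as a runtime input rather than a fixed attribute, which is what makes dynamic padding expressible in ONNX. A minimal sketch built with `onnx.helper` (tensor names here are illustrative, not taken from the model above):

```python
import onnx
from onnx import helper, TensorProto

# Pad-11 reads "pads" from a second input, so it no longer has to be a Const.
pad_node = helper.make_node("Pad", inputs=["data", "pads"],
                            outputs=["padded"], mode="constant")

graph = helper.make_graph(
    [pad_node], "dynamic_pad_sketch",
    inputs=[
        helper.make_tensor_value_info("data", TensorProto.FLOAT, [None, None]),
        # 2 * rank entries: begin values for all axes, then end values
        helper.make_tensor_value_info("pads", TensorProto.INT64, [4]),
    ],
    outputs=[helper.make_tensor_value_info("padded", TensorProto.FLOAT, [None, None])],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```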

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Win10
  • Tensorflow Version: 1.13.1
  • Python version: 3.6.6
  • ONNX: 1.6.0
  • ONNXRUNTIME: 1.1.10
  • tf2onnx: 1.5.3
@guschmue
Contributor

Try --opset 11 on the command line.
We have not pushed this to PyPI yet, so you'd need to use master.
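For reference, a typical invocation would look something like the following (the graph path and tensor names are placeholders for your model):

```sh
# Install tf2onnx from master rather than PyPI (assumes a plain pip environment).
pip install -U git+https://github.com/onnx/tensorflow-onnx

# Convert with opset 11 so Pad can take dynamic paddings as an input.
python -m tf2onnx.convert --input frozen_graph.pb \
    --inputs input:0 --outputs output:0 \
    --output model.onnx --opset 11
```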

@Exlsunshine
Author

Exlsunshine commented Nov 22, 2019

> Try --opset 11 on the command line.
> We have not pushed this to PyPI yet, so you'd need to use master.

Hi @guschmue, I have tried opset 11 and installed tf2onnx from master, but I still get the same error. I also forgot to mention that another op reports the same error; I get the same result regardless of whether I use opset 11/10/9/8/7 or install tf2onnx from PyPI or master:

2019-11-22 10:02:17,020 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s)
2019-11-22 10:02:17,195 - ERROR - tf2onnx.tfonnx: Failed to convert node parallel_0/while/Fill
OP=ConstantOfShape
Name=parallel_0/while/Fill
Inputs:
        parallel_0/while/Fill__674:0=Cast, [1], 7
        parallel_0/while/strided_slice_21__666:0=Squeeze, [], 6
Outputs:
        parallel_0/while/Fill:0=[-1], 6
Traceback (most recent call last):
  File "tensorflow-onnx\tf2onnx\tfonnx.py", line 354, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "tensorflow-onnx\tf2onnx\onnx_opset\generator.py", line 100, in version_9
    value = np.array([node.inputs[1].get_tensor_value()]).astype(utils.map_onnx_to_numpy_type(dtype))
  File "tensorflow-onnx\tf2onnx\graph.py", line 260, in get_tensor_value
    raise ValueError("get tensor value: {} must be Const".format(self.name))
ValueError: get tensor value: parallel_0/while/strided_slice_21__666 must be Const
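For what it's worth, the constraint behind this second error is that ONNX ConstantOfShape takes its shape as a runtime input but its fill value as a node attribute, so the converter has to read the value as a constant. When the fill value itself is computed at runtime, tf.fill can in principle be expressed with Expand instead. A rough sketch (names are illustrative; this is not necessarily how tf2onnx implements the fix):

```python
import onnx
from onnx import helper, TensorProto

# Expand broadcasts a (here scalar) value to a runtime-provided shape,
# which matches tf.fill with a non-constant fill value.
expand_node = helper.make_node("Expand", inputs=["value", "shape"],
                               outputs=["filled"])

graph = helper.make_graph(
    [expand_node], "dynamic_fill_sketch",
    inputs=[
        helper.make_tensor_value_info("value", TensorProto.INT64, []),   # scalar fill value
        helper.make_tensor_value_info("shape", TensorProto.INT64, [1]),  # 1-D target shape
    ],
    outputs=[helper.make_tensor_value_info("filled", TensorProto.INT64, [None])],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```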

@jignparm
Contributor

The error after --opset 11 is slightly different.

Could you attach the model you are converting (to ensure the solution works for your scenario)? The Fill operator should be refactorable to work with dynamic inputs.

@jignparm
Contributor

jignparm commented Dec 17, 2019

@Exlsunshine, thanks for sharing the model offline.

The Fill operator is updated by #748, and the ExpandDims operator is addressed by #753.

The other fix required is not in tf2onnx but in ONNX Runtime, for the Reshape operator. It requires a change to the ONNX spec so that Reshape behaves like NumPy and TensorFlow. The current spec has an idiosyncrasy that causes the wrong shape to be produced (e.g. a tensor of shape [0, 1] reshaped to [1, 0] ends up as [1, 1], which is neither intuitive nor correct).
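To illustrate the difference with a quick NumPy comparison (NumPy and TensorFlow treat a 0 in the target shape literally, whereas the ONNX Reshape spec at the time interpreted 0 as "copy that dimension from the input"):

```python
import numpy as np

x = np.zeros((0, 1))
print(x.reshape(1, 0).shape)  # (1, 0): NumPy/TensorFlow keep the zero dimension

# Under the ONNX Reshape spec discussed above, a 0 in the target shape means
# "take the corresponding dimension from the input", so reshaping a [0, 1]
# tensor to [1, 0] comes out as [1, 1] instead.
```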

The ONNX issue is raised at onnx/onnx#2507. Once that is approved by the community, the required code changes will be made in ONNX Runtime via microsoft/onnxruntime#2656.

@guschmue
Contributor

guschmue commented Apr 7, 2021

I assume this is fixed with opset-13.

guschmue closed this as completed on Apr 7, 2021.