
For some Const node: "initializer shape is inconsistent" #83

Closed
pengwa opened this issue Jul 19, 2018 · 7 comments

pengwa commented Jul 19, 2018

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 100, in <module>
    main()
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 92, in main
    optimize=not args.continue_on_error)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/graph.py", line 427, in make_model
    raise ValueError("initializer shape is inconsistent")
ValueError: initializer shape is inconsistent
python3 -m tf2onnx.convert --input /tmp/frozen/dcgan_2.pb --inputs y:0,z:1 --outputs generator/Sigmoid:0 --output dcgan_2.pb.onnx --verbose --continue_on_error
generated onnx is located /home/pengwang/dcgan_2.pb.onnx
------ switch back to original directory: /home/pengwang/community/learning/onnx

The problematic node looks like this:

node {
  name: "generator/g_bn2/Const_1"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
          }
        }
      }
    }
  }
}

At https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/graph.py#L421, the runtime values are:
list(shape) is [0]
initializer.dims is [1]

Since the constant above is a scalar, I think initializer.dims might be wrong.
initializer.dims is created in add_initializer, which is called from https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/tfonnx.py#L206; when I print(tensor) and print(type(tensor)) there, it shows:

name: "value"
t {
  dims: 1
  data_type: FLOAT
  float_data: 0.0
  name: "generator/g_bn2/Const:1"
}
type: TENSOR

<class 'onnx_pb2.AttributeProto'>
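To make the failure concrete, here is a minimal sketch (with a hypothetical helper name, not the actual tf2onnx code) of the consistency check in make_model that raises here, fed with the runtime values observed above:

```python
# Hypothetical sketch of the check in graph.py make_model: the shape
# tf2onnx recorded for the output is compared against the dims stored
# on the emitted ONNX initializer.

def check_shape_consistent(shape, initializer_dims):
    if list(shape) != list(initializer_dims):
        raise ValueError("initializer shape is inconsistent")

# Runtime values observed for the scalar Const above:
shape = [0]              # from TF's empty tensor_shape { dim {} }
initializer_dims = [1]   # dims: 1 on the emitted initializer

try:
    check_shape_consistent(shape, initializer_dims)
except ValueError as e:
    print(e)  # prints: initializer shape is inconsistent
```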

Any thoughts? @guschmue

------------------- working Const nodes pasted below for comparison -------------------

On the other hand, here are example Const nodes that convert fine:

node {
  name: "generator/g_bn2/AssignMovingAvg/decay"
  op: "Const"
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@generator/g_bn2/moving_mean"
      }
    }
  }
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
        }
        float_val: 0.10000000149011612
      }
    }
  }
}


node {
  name: "generator/g_h3/biases"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 1
          }
        }
        float_val: 0.0900091901421547
      }
    }
  }
}


node {
  name: "generator/ones_1/shape_as_tensor"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 4
          }
        }
        tensor_content: "@\000\000\000\016\000\000\000\016\000\000\000\n\000\000\000"
      }
    }
  }
}

node {
  name: "generator/g_h3/w"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 5
          }
          dim {
            size: 5
          }
          dim {
            size: 1
          }
          dim {
            size: 138
          }
        }
        tensor_content: ""
      }
    }
  }
}
@guschmue (Contributor)

I remember there was an issue with caffe2 and winml where some tensor ops don't work if you pass a scalar but are perfectly happy if you pass [scalar], and I think we intentionally made a change to use [scalar] for that reason. Let me look at this again; it was some time ago and I don't know if caffe2 still needs this. We could maybe gate this for the caffe2 runtime ... the target mechanism was meant to allow runtime-specific workarounds.

if ctx.is_target(TARGET_CAFFE2):
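The gating could look roughly like this (a sketch with a made-up Ctx class for illustration; only the TARGET_CAFFE2 / is_target names mirror tf2onnx's actual target mechanism):

```python
# Hypothetical sketch of gating the strict shape check by target runtime.
# The Ctx class here is invented for illustration; tf2onnx's real context
# object differs.

TARGET_CAFFE2 = "caffe2"

class Ctx:
    def __init__(self, targets):
        self._targets = set(targets)

    def is_target(self, name):
        return name in self._targets

def check_initializer(ctx, shape, dims):
    # Enforce the strict shape == dims check only for targets known
    # to require it.
    if ctx.is_target(TARGET_CAFFE2) and list(shape) != list(dims):
        raise ValueError("initializer shape is inconsistent")

check_initializer(Ctx([]), [0], [1])            # passes: caffe2 not targeted
# check_initializer(Ctx(["caffe2"]), [0], [1])  # would raise ValueError
```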


pengwa commented Jul 19, 2018

Thanks @guschmue for the reply, but this issue occurs in the TF graph -> ONNX conversion stage; it hasn't reached the caffe2 runtime yet.


pengwa commented Jul 19, 2018

Do you mean I should guard the check at https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/graph.py#L421 with "if ctx.is_target(TARGET_CAFFE2):"?

@guschmue (Contributor)

Void my comment ... not the same. I'll look at it today.


pengwa commented Jul 19, 2018

I just commented out the L421 check to unblock the model conversion, so maybe we can consider loosening the shape == dims check for some special cases. Anyway, we can discuss once you have more context later. :)
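A loosened check might look roughly like this (a hypothetical sketch, not an actual tf2onnx change): treat scalar-like shapes as mutually compatible, and require an exact match otherwise.

```python
# Hypothetical relaxed check: treat scalar-like shapes ([], [0]) and a
# one-element vector ([1]) as mutually compatible; everything else must
# match exactly. Sketch only.

def shapes_compatible(shape, dims):
    shape, dims = list(shape), list(dims)
    if shape == dims:
        return True
    scalar_like = ([], [0], [1])
    return shape in scalar_like and dims in scalar_like

assert shapes_compatible([0], [1])                        # the failing scalar Const
assert shapes_compatible([5, 5, 1, 138], [5, 5, 1, 138])  # normal tensors still strict
assert not shapes_compatible([4], [2])
```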

@guschmue (Contributor)

sure


pengwa commented Jan 25, 2019

Not seeing this issue recently; closing this unless we hit the failure again.

pengwa closed this as completed on Jan 25, 2019