
[NNVM][FRONTEND][ONNX] Fix the gemm conversion in onnx frontend #1241

Merged 1 commit into apache:master on Jun 11, 2018

Conversation

@nishi-t (Contributor) commented Jun 6, 2018:

This PR addresses #1231.
I think the problem was caused by passing 4-dimensional, unflattened data to the dense operator. I confirmed locally that vgg19 now compiles successfully. Please review.

@nishi-t nishi-t changed the title Fix the gemm conversion in onnx frontend [NNVM][ONNX] Fix the gemm conversion in onnx frontend Jun 6, 2018
@nishi-t nishi-t changed the title [NNVM][ONNX] Fix the gemm conversion in onnx frontend [NNVM][FRONTEND][ONNX] Fix the gemm conversion in onnx frontend Jun 6, 2018
```diff
@@ -186,6 +186,7 @@ def _impl_v1(cls, inputs, attr, params):
             inputs[0] = _sym.transpose(inputs[0], axes=(1, 0))
         if not transB:
             inputs[1] = _sym.transpose(inputs[1], axes=(1, 0))
+        inputs[0] = _sym.flatten(inputs[0])
```
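For context: nnvm's dense operator expects 2-D input of shape (batch, features), while the output of a convolution/pooling stack (as in VGG) is 4-D, which is why the flatten is inserted. Below is a hedged reconstruction of the surrounding converter after this change; everything outside the diff context, such as the _infer_channels helper and the alpha/beta handling, is recalled from the file at the time and should be checked against the source.

```python
# Hedged reconstruction of the Gemm converter in
# nnvm/python/nnvm/frontend/onnx.py after this PR. Only the flatten
# line is part of this diff; the rest is recalled context.
@classmethod
def _impl_v1(cls, inputs, attr, params):
    assert len(inputs) == 3, "Gemm op takes 3 inputs, {} given".format(len(inputs))
    # ONNX Gemm computes Y = alpha * A' * B' + beta * C
    alpha = float(attr.get('alpha', 1.0))
    beta = float(attr.get('beta', 1.0))
    transA = int(attr.get('transA', 0))
    transB = int(attr.get('transB', 0))
    channels = _infer_channels(inputs[1], params, not transB)
    if transA:
        inputs[0] = _sym.transpose(inputs[0], axes=(1, 0))
    if not transB:
        inputs[1] = _sym.transpose(inputs[1], axes=(1, 0))
    # The fix: dense requires 2-D input, but models like vgg19 feed the
    # 4-D output of a pooling layer into Gemm, so flatten first.
    inputs[0] = _sym.flatten(inputs[0])
    return _sym.dense(alpha * inputs[0], inputs[1], beta * inputs[2],
                      units=channels)
```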
Member:
Can we verify whether the flatten is necessary? I.e., check the input's dimension and only insert the flatten when needed.
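A minimal sketch of the suggested check, assuming a helper like the _infer_shape mentioned below that returns the symbol's inferred output shape (the rest of the thread shows this assumption does not hold at conversion time):

```python
# Hypothetical: insert the flatten only when the input is not already 2-D.
# Assumes inputs[0]'s shape can be inferred here, which the discussion
# below shows is not the case during ONNX frontend conversion.
in_shape = _infer_shape(inputs[0])
if len(in_shape) != 2:
    inputs[0] = _sym.flatten(inputs[0])
```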

Member:

@nishi-t can you act on this comment?

@nishi-t (Contributor, Author):

I'm trying, but I haven't managed it yet, because I'm struggling to get the input data's dimensions inside _impl_v1 to check whether the flatten is necessary. I tried using the _infer_shape auxiliary function to get inputs[0]'s output shape, but it does not work when inputs[0] is a pooling op. Could you give me some advice?

Member:

It is strange that you cannot get the output shape when inputs[0] is a pooling op. Can you look into what happens there?

@nishi-t (Contributor, Author) commented Jun 11, 2018:

Thank you for your advice. I analyzed what happens here and found a few things.
At the nnvm.frontend.from_onnx stage, infer_shape may not work completely, because the input shapes may not be decided yet.

When I put the following code before the flatten and ran it, out_shapes was [[]]:

```python
from nnvm import graph as _graph
from nnvm.compiler import graph_util

g = _graph.create(inputs[0])  # graph for the symbol feeding the flatten
shape_dict = {k: v.shape for k, v in params.items()}
_, out_shapes = graph_util.infer_shape(g, **shape_dict)
```

In the following lines, called from the infer_shape above, dshape.ndim() became 0 and the shape-inference functions returned false:

https://github.com/dmlc/tvm/blob/master/nnvm/src/top/nn/pooling.cc#L33
https://github.com/dmlc/tvm/blob/master/nnvm/src/top/nn/convolution.cc#L62

It seems difficult to use infer_shape for my purpose without making any changes. Please let me know what you think.

Member:

@zhreshold can you comment on this?

Member:

infer_shape is not available because the input shape is not ready. Currently I cannot find a way to detect whether the flatten is necessary either.

@tqchen (Member) commented Jun 11, 2018:

OK, maybe an easier way is to add an optimization in nnvm that removes the flatten when it is not necessary. I am going to merge this in for now.
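A hedged sketch of the detection half of such an optimization, reusing the graph_util API from earlier in the thread; at compile time the input shapes are bound, so shape inference succeeds. This is illustrative only: a real pass would also rewire the graph, likely as a C++ graph pass in nnvm.

```python
from nnvm import graph as _graph
from nnvm.compiler import graph_util

def flatten_is_identity(sym, shape_dict):
    """Illustrative predicate: True when sym's output is already 2-D,
    so a flatten applied to it is a no-op and could be removed."""
    g = _graph.create(sym)
    _, out_shapes = graph_util.infer_shape(g, **shape_dict)
    return len(out_shapes[0]) == 2
```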

@tqchen tqchen merged commit 66fab7e into apache:master Jun 11, 2018
@tqchen tqchen mentioned this pull request Jun 12, 2018
tqchen pushed a commit to tqchen/tvm that referenced this pull request Jul 6, 2018
mnuyens pushed a commit to mnuyens/tvm that referenced this pull request Jul 10, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request Aug 8, 2018