
add transpose optimizer, and integrate it after tf graph conversion #108

Merged
merged 7 commits into from Aug 24, 2018

Conversation


@pengwa pengwa commented Aug 22, 2018

No description provided.


@pengwa pengwa commented Aug 22, 2018

Nothing changes for running the converter, but I did enable TensorFlow constant folding.


@@ -94,10 +95,14 @@ def main():
"converted from {}".format(args.input), args.inputs, args.outputs,
optimize=not args.continue_on_error)

onnx_graph = OnnxGraph(model_proto.graph)
optimizer = TransposeOptimizer(onnx_graph)

@guschmue guschmue Aug 22, 2018

Can you make this optional via the command line? I don't think this is tested well enough yet.

@pengwa pengwa Aug 23, 2018

okay, let me do that.
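The opt-in flag discussed above could look like the following minimal argparse sketch. The flag name `--optimize-transpose` is hypothetical; the actual flag added in the PR may differ.

```python
import argparse

parser = argparse.ArgumentParser()
# hypothetical flag name: the transpose optimizer stays off unless requested
parser.add_argument("--optimize-transpose", action="store_true",
                    help="run the experimental transpose optimizer")

args = parser.parse_args(["--optimize-transpose"])
if args.optimize_transpose:
    print("transpose optimizer enabled")
```

With `action="store_true"` the attribute defaults to `False`, so existing invocations of the converter keep their old behavior.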


@@ -0,0 +1,379 @@
from __future__ import division
from __future__ import print_function

@guschmue guschmue Aug 22, 2018

Why not just use graph.py? Most of the code in this file looks the same.

@pengwa pengwa Aug 23, 2018

Originally I tried to re-use graph.py, but I found more ONNX-graph-specific things were needed and gradually changed a lot, so I would prefer to make it a new, simpler class. I'm actually trying to make the class even simpler.

@pengwa pengwa Aug 23, 2018

OnnxGraph has had most of its member functions rewritten to 1) update the output nodes and output-node count, and 2) remove unused nodes and initializers automatically once set_nodes() is called.
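The automatic removal in 2) can be sketched as a backward reachability walk from the graph outputs. The `Node` stand-in and names below are hypothetical, not the PR's actual code; real onnx `NodeProto` objects expose `input`/`output` name lists the same way.

```python
from collections import namedtuple

# minimal stand-in for onnx NodeProto: lists of input/output tensor names
Node = namedtuple("Node", ["input", "output"])

def prune_unused_nodes(nodes, graph_outputs):
    """Keep only nodes reachable backwards from the graph outputs."""
    producer = {name: n for n in nodes for name in n.output}
    needed, stack = set(), list(graph_outputs)
    while stack:
        node = producer.get(stack.pop())
        if node is not None and id(node) not in needed:
            needed.add(id(node))
            stack.extend(node.input)
    return [n for n in nodes if id(n) in needed]

a = Node(input=[], output=["x"])
b = Node(input=["x"], output=["y"])
dead = Node(input=[], output=["unused"])
kept = prune_unused_nodes([a, b, dead], ["y"])
```

Anything that no output transitively depends on (here, `dead`) drops out, which is what makes a `set_nodes()`-triggered cleanup safe after rewrites delete or bypass nodes.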

@guschmue guschmue Aug 23, 2018

We want exactly one internal onnx graph / node representation. If there are changes you need in graph.py I'm sure that can be done.

@pengwa pengwa Aug 24, 2018

OK, let me try.

@pengwa pengwa Aug 24, 2018

@guschmue removed onnx_graph.py by reusing graph.py.


return
switch_transpose_and_node(g, p, trans)
return True

@guschmue guschmue Aug 22, 2018

How about ops like StridedSlice? I think you'd need to make a pass through all ops to check whether any of their attributes is impacted.

@pengwa pengwa Aug 23, 2018

For ops that are not listed explicitly in the if branch, the transposes before them remain in place, so the correctness of the graph is guaranteed.

I'll add StridedSlice support later.
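The rewrite being discussed relies on transposes commuting with elementwise ops. A quick numpy check of that identity, with `relu` standing in for any elementwise op:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

x = np.random.randn(1, 4, 4, 3)  # NHWC tensor
perm = (0, 3, 1, 2)              # NHWC -> NCHW

# pushing the Transpose past an elementwise op leaves the result unchanged;
# this is what lets the optimizer move transposes until pairs cancel out
before = relu(np.transpose(x, perm))
after = np.transpose(relu(x), perm)
assert np.array_equal(before, after)
```

Ops whose attributes reference axes (slices, reductions, concat) do not commute this freely, which is why the unsupported-op fallback of leaving the transpose in place matters for correctness.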


@@ -1369,7 +1369,7 @@ def tensorflow_onnx_mapping(g, continue_on_error, custom_op_handlers):
def tf_optimize(sess, inputs, outputs, graph_def):
"""Optimize tensorflow graph for inference."""
transforms = [
# "fold_constants(ignore_errors=true)",
@guschmue guschmue Aug 22, 2018

fold_constants on some graphs runs into a tensorflow issue ... that is the reason why it is off.

@pengwa pengwa Aug 23, 2018

Maybe I can enable it only once the user decides to enable the optimization? The current optimization depends on it; otherwise lots of transposes remain. But eventually I will try the case where it is disabled.

@guschmue guschmue Aug 23, 2018

I think that would be OK.
The error message you'd get is 'expects to be colocated with unknown node' and there are some reports of this (I thought I had seen a detailed tensorflow issue on it but can't find it anymore).
We do have models that run into this, but I have not looked closer ... maybe it's time to understand what the root cause is.


@@ -206,11 +206,10 @@ def run_onnxmsrt(self, name, onnx_graph, inputs):
self.onnx_runtime = time.time() - start
return results

def run_onnxmsrtnext(self, name, onnx_graph, inputs):
def run_onnxmsrtnext(self, name, model_proto, inputs):
"""Run test against msrt-next backend."""

@guschmue guschmue Aug 22, 2018

When running this with your changes:
mobilenet_v1_75_192
downloaded /tmp/pre-trained\mobilenet_v1_0.75_192_frozen\mobilenet_v1_0.75_192/frozen_graph.pb
2018-08-22 14:28:57.717831: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_constants
2018-08-22 14:28:57.870910: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_batch_norms
2018-08-22 14:28:58.012064: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_old_batch_norms
tensorflow OK
before optimization: ops statistics: Counter({'Transpose': 58, 'Conv': 28, 'Add': 28, 'Max': 27, 'Min': 27, 'Mul': 13, 'Identity': 1, 'Softmax': 1, 'Reshape': 1, 'AveragePool': 1, 'Squeeze': 1})
to_onnx FAIL name 'TensorShapeProto' is not defined
run_onnx FAIL 'NoneType' object has no attribute 'SerializeToString'

mobilenet_v1_100_224
downloaded /tmp/pre-trained\mobilenet_v1_1.0_224_frozen\mobilenet_v1_1.0_224/frozen_graph.pb
2018-08-22 14:28:15.857182: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_constants
2018-08-22 14:28:16.066649: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_batch_norms
2018-08-22 14:28:16.295635: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_old_batch_norms
tensorflow OK
before optimization: ops statistics: Counter({'Transpose': 58, 'Conv': 28, 'Add': 28, 'Max': 27, 'Min': 27, 'Mul': 13, 'Identity': 1, 'Softmax': 1, 'Reshape': 1, 'AveragePool': 1, 'Squeeze': 1})
to_onnx FAIL name 'TensorShapeProto' is not defined
run_onnx FAIL 'NoneType' object has no attribute 'SerializeToString'
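(Editorial note: the "ops statistics" line in the log above can be reproduced with `collections.Counter` over the graph's op types; the op-type list here is a hypothetical excerpt.)

```python
from collections import Counter

# hypothetical op-type list extracted from a converted graph's nodes
op_types = ["Transpose", "Conv", "Conv", "Add", "Transpose", "Softmax"]
stats = Counter(op_types)
print("ops statistics:", stats)
```

Comparing such counters before and after optimization makes the effect of the transpose optimizer visible at a glance, e.g. the Transpose count dropping.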

@pengwa pengwa Aug 23, 2018

Oh, my bad, it's a last-minute change. I'll fix it, sorry.


@guschmue guschmue merged commit 85f7bf7 into onnx:master Aug 24, 2018
1 check passed

@pengwa pengwa commented Aug 24, 2018

thanks @guschmue !!


@pengwa pengwa deleted the add_transpose_optimizer branch Aug 28, 2018