Operations used for inference are dropped by optimize_for_inference #8242

Closed
tristandeleu opened this issue Mar 9, 2017 · 6 comments
Labels
stat:awaiting response Status - Awaiting response from author

@tristandeleu

Inspired by the TensorFlow for Poets post, I have been exporting models optimized for inference with the freeze_graph and optimize_for_inference tools. I have run into an issue where some of the nodes required for inference get dropped by optimize_for_inference, the most critical one being the output node itself, even though it was explicitly given to freeze_graph and optimize_for_inference (through output_node_name/output_names).

I think this might be related to the output node being a tf.identity (used to give an explicit name to the result of a tf.layers call, for example).

Minimal working example

Here is a piece of code that creates a very simple model, running on TensorFlow 1.0.1.

import tensorflow as tf

l_input = tf.placeholder(tf.float32, shape=(None, 2), name='input')
l_dense = tf.layers.dense(l_input, units=1, activation=None)
l_output = tf.identity(l_dense, name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver(tf.global_variables())
    
    # Save GraphDef
    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')
    # Save Checkpoint
    saver.save(sess, 'model.ckpt', write_meta_graph=False)

I am exporting the model using the freeze_graph and optimize_for_inference tools, inspired by the TensorFlow for Poets post.

bazel-bin/tensorflow/python/tools/freeze_graph --input_graph graph.pb --input_checkpoint model.ckpt --output_graph graph_frozen.pb --output_node_name=output
bazel-bin/tensorflow/python/tools/optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=input --output_names=output

I am using Python to load both of these models (graph_frozen.pb and graph_optimized.pb). The model defined by graph_frozen.pb works as expected, but the model defined by graph_optimized.pb is missing some operations (import/dense/BiasAdd and import/output).

import tensorflow as tf
import numpy as np

# Data
x = np.random.rand(3, 2)

# Frozen Graph
with tf.gfile.GFile('graph_frozen.pb', 'rb') as f:
    graph_def_frozen = tf.GraphDef()
    graph_def_frozen.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    l_output, = tf.import_graph_def(graph_def_frozen,
        return_elements=['output:0'], 
        name='import'
    )
    print('Operations in Frozen Graph:')
    print([op.name for op in graph.get_operations()])
    # >>> [u'import/input', u'import/dense/kernel',
    #      u'import/dense/kernel/read', u'import/dense/bias',
    #      u'import/dense/bias/read', u'import/dense/MatMul',
    #      u'import/dense/BiasAdd', u'import/output']

    l_input = graph.get_tensor_by_name('import/input:0')

    with tf.Session(graph=graph) as sess:
        sess.run(l_output, feed_dict={l_input: x})

# Optimized Graph
with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:
    graph_def_optimized = tf.GraphDef()
    graph_def_optimized.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    # Using `return_elements=['output:0']` raises a ValueError
    # ValueError: Requested return_element 'output:0' not found in graph_def.
    tf.import_graph_def(graph_def_optimized, name='import')
    print('Operations in Optimized Graph:')
    print([op.name for op in graph.get_operations()])
    # >>> [u'import/input', u'import/dense/kernel',
    #      u'import/dense/bias', u'import/dense/MatMul']

    l_input = graph.get_tensor_by_name('import/input:0')

    # Raises a KeyError
    # KeyError: "The name 'import/output:0' refers to a Tensor which does
    # not exist. The operation, 'import/output', does not exist in the graph."
    l_output = graph.get_tensor_by_name('import/output:0')
    
    with tf.Session(graph=graph) as sess:
        sess.run(l_output, feed_dict={l_input: x})

Environment info

  • Operating System: OSX 10.11.1
  • TensorFlow installed through pip (tensorflow-1.0.1-cp27-cp27m-macosx_10_11_x86_64.whl)
  • TensorFlow v.1.0.1
  • To export the models, I built freeze_graph and optimize_for_inference with bazel (version 0.4.3-homebrew) at commit 100552f
@tristandeleu tristandeleu changed the title Operations used by inference are dropped by optimize_for_inference Operations used for inference are dropped by optimize_for_inference Mar 9, 2017
@tristandeleu
Author

tristandeleu commented Mar 23, 2017

I temporarily worked around this issue in this particular case by replacing the tf.identity node with

l_output = tf.multiply(l_dense, 1., name='output')

But I think optimize_for_inference should not drop operations that are actually used for inference, especially the output node.
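
For reference, this is how the workaround fits into the minimal example above; only the definition of l_output changes, and the export commands stay the same:

import tensorflow as tf

l_input = tf.placeholder(tf.float32, shape=(None, 2), name='input')
l_dense = tf.layers.dense(l_input, units=1, activation=None)
# A no-op multiply by 1.0 instead of tf.identity, so that the named
# 'output' node survives optimize_for_inference.
l_output = tf.multiply(l_dense, 1., name='output')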

@somewacko

Just ran into this issue myself, but I feel like the bigger issue is that using tf.identity to name tensors is bad practice, and there should be some other way of renaming important tensors that you want to be able to fetch easily later on (especially when you're using a library to help build layers and don't have control over your tensor names when they're created).

Also, I'd imagine that using tf.multiply is a poor temporary solution, since it might add runtime overhead if you have a large output (e.g. for segmentation) and could change the output if you quantize the model later on. The better solution would be to inspect the actual name of your output tensor and use that instead.
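
For example, with the snippet above, something along these lines should work (the exact generated name depends on how the graph was built, so 'dense/BiasAdd' here is an illustration rather than a guarantee):

import tensorflow as tf

l_input = tf.placeholder(tf.float32, shape=(None, 2), name='input')
l_dense = tf.layers.dense(l_input, units=1, activation=None)

# Inspect the real name of the tensor you want to fetch later,
# e.g. 'dense/BiasAdd:0' for this graph.
print(l_dense.name)

# freeze_graph / optimize_for_inference expect the node name, i.e. the
# tensor name without the ':0' suffix, e.g. --output_names=dense/BiasAdd.
print(l_dense.name.split(':')[0])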

@petewarden
Contributor

I'm not sure of the underlying issue here, but I'm hoping that the new graph transform approach to removing unused nodes might be more robust?
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms/#optimizing-for-deployment

@gunan gunan added the stat:awaiting response Status - Awaiting response from author label Jun 16, 2017
@petewarden
Contributor

Since there's been no activity on this one for several weeks, I'm closing this for now. Please reopen with more information if this is incorrect.

@bendaf

bendaf commented Jul 21, 2017

I've run into the same issue with TensorFlow 1.2.1. I optimized the default tiny-yolo-voc model from darkflow and the output node disappeared.

@jeffxtang
Contributor

This is definitely a bug in both the strip_unused and optimize_for_inference tools. Using transform_graph fixed it! See my answer at https://stackoverflow.com/questions/48212068/error-using-model-after-using-optimize-for-inference-py-on-frozen-graph/48638586#48638586 for details.
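
A rough sketch of this approach in Python, assuming the tensorflow.tools.graph_transforms.TransformGraph wrapper and the transforms described in the graph_transforms README are available in your build:

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load the frozen graph produced by freeze_graph.
with tf.gfile.GFile('graph_frozen.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Strip training-only nodes while keeping 'output' declared as an output,
# instead of going through optimize_for_inference.
transformed_graph_def = TransformGraph(
    graph_def,
    ['input'],   # input node names
    ['output'],  # output node names
    ['strip_unused_nodes', 'fold_constants(ignore_errors=true)'])

with tf.gfile.GFile('graph_transformed.pb', 'wb') as f:
    f.write(transformed_graph_def.SerializeToString())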
