freeze_graph not initializing tables #8665
Comments
|
@petewarden, do you have any insight into this? |
|
Freezing_problem.zip — models are trained with the following command, then freeze_2_textsum.py is called with the following syntax (the exact commands are in the attached zip). In this case, we are able to find the saved constants in the frozen_model.pb file. Run: python test_tf_frozen_txtsum.py |
|
@petewarden is this still an issue? |
|
One solution is to call explicit initialization in the freeze_graph.py script by specifying the `--initializer_nodes` flag. You'll need to know the name of the node that initializes your tables or other data structures, though. |
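For reference, such an invocation looks roughly like the following sketch (paths and the output node name are illustrative; `init_all_tables` is the default name of the op created by `tf.tables_initializer()`, so check your graph for the actual node name):

```shell
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/tmp/tf/graph.pbtxt \
  --input_checkpoint=/tmp/tf/model.ckpt-0 \
  --output_graph=/tmp/frozen_model.pb \
  --output_node_names=sigmoid \
  --initializer_nodes=init_all_tables
```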
|
I believe I tried that, but it's always possible I did it wrong. Ultimately I have moved away from lookup tables since the Java bindings still don't support passing arrays of strings. If that is added, I will be interested in the issue again. |
|
Seeing the same problem here. It doesn't seem like the `--initializer_nodes` flag actually has any effect. |
|
I have a node: |
|
It's possible there's a bug in the way that `--initializer_nodes` is handled. |
|
Yeah, I verified that that line of code is being called, but the output graph doesn't seem to actually use anything from the initializer nodes. |
|
Any update for this issue? |
|
@petewarden ^^ |
|
Ping. Can anyone help look into this bug? |
|
any updates? |
|
I have the same problem. I am trying to store my graph as a protobuf and then load it; when I run the loaded graph I get the same error. However, when I run the graph directly (without storing it to .pb first) it works fine:

```python
with tf.gfile.GFile(frozen_file_name, "wb") as f:
    f.write(frozen_graph_def.SerializeToString())

g = load_graph(frozen_file_name)
with tf.Session(graph=g, config=utils.get_config_proto()) as sess1:
    prefix = 'prefix/'
    sess1.run(
        prefix + infer_model.iterator.initializer.name,
        feed_dict={
            prefix + infer_model.src_placeholder.name: infer_data,
            prefix + infer_model.batch_size_placeholder.name: hparams.infer_batch_size
        })
``` |
|
Here is a minimal example of this issue:

```python
import os
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.ops.lookup_ops import HashTable, KeyValueTensorInitializer

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

OUTPUT_FOLDER = '/tmp'
OUTPUT_NAME = 'hash_table.pb'
OUTPUT_NAMES = ['output']

def build_graph():
    d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
    init = KeyValueTensorInitializer(d.keys(), d.values())
    hash_table = HashTable(init, default_value=-1)
    data = tf.placeholder(tf.string, (None,), name='data')
    values = hash_table.lookup(data)
    output = tf.identity(values * 2, 'output')

def freeze_graph():
    with tf.Graph().as_default() as graph:
        build_graph()
        with tf.Session(graph=graph) as sess:
            sess.run(tf.tables_initializer())
            print sess.run('output:0', feed_dict={'data:0': ['a', 'b', 'c', 'd', 'e']})
            frozen_graph = convert_variables_to_constants(sess, sess.graph_def, OUTPUT_NAMES)
            tf.train.write_graph(frozen_graph, OUTPUT_FOLDER, OUTPUT_NAME, as_text=False)

def load_frozen_graph():
    with open(os.path.join(OUTPUT_FOLDER, OUTPUT_NAME), 'rb') as f:
        output_graph_def = tf.GraphDef()
        output_graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(output_graph_def, name='')
        with tf.Session(graph=graph) as sess:
            print sess.run('output:0', feed_dict={'data:0': ['a', 'b', 'c', 'd', 'e']})

if __name__ == '__main__':
    freeze_graph()
    load_frozen_graph()
```

Output: the first run prints the expected values, but running the frozen graph fails with "Table not initialized". It seems like the table initializer is dropped during freezing. Any help would be appreciated. |
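For what it's worth, the root cause can be illustrated without TensorFlow: `convert_variables_to_constants` keeps only the nodes reachable from the requested output names by walking node inputs, and a table's initializer op is not an input of the lookup op, so it gets pruned unless it is named explicitly. A toy reachability sketch (plain Python, not TensorFlow code):

```python
def reachable(graph, outputs):
    """Return the set of node names reachable from `outputs` via inputs."""
    seen = set()
    stack = list(outputs)
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

# node -> list of input nodes, mimicking the minimal example above
toy_graph = {
    'output': ['lookup'],
    'lookup': ['data', 'hash_table'],
    'hash_table': [],
    'init_all_tables': ['hash_table'],  # the initializer points AT the table,
    'data': [],                         # nothing points at the initializer
}

print(reachable(toy_graph, ['output']))                     # initializer pruned
print(reachable(toy_graph, ['output', 'init_all_tables']))  # initializer kept
```

Nothing depends on `init_all_tables`, so a traversal starting from `output` alone never reaches it; listing it as an extra output keeps it in the frozen graph.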
|
Adding `init_all_tables` to the output node names, and running it after import, works around this:

```python
import os
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.ops.lookup_ops import HashTable, KeyValueTensorInitializer

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

OUTPUT_FOLDER = '/tmp'
OUTPUT_NAME = 'hash_table.pb'
OUTPUT_NAMES = ['graph/output', 'init_all_tables']

def build_graph():
    d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
    init = KeyValueTensorInitializer(d.keys(), d.values())
    hash_table = HashTable(init, default_value=-1)
    data = tf.placeholder(tf.string, (None,), name='data')
    values = hash_table.lookup(data)
    output = tf.identity(values * 2, 'output')

def freeze_graph():
    with tf.Graph().as_default() as graph:
        with tf.name_scope('graph'):
            build_graph()
        with tf.Session(graph=graph) as sess:
            sess.run(tf.tables_initializer())
            print sess.run('graph/output:0', feed_dict={'graph/data:0': ['a', 'b', 'c', 'd', 'e']})
            frozen_graph = convert_variables_to_constants(sess, sess.graph_def, OUTPUT_NAMES)
            tf.train.write_graph(frozen_graph, OUTPUT_FOLDER, OUTPUT_NAME, as_text=False)

def load_frozen_graph():
    with open(os.path.join(OUTPUT_FOLDER, OUTPUT_NAME), 'rb') as f:
        output_graph_def = tf.GraphDef()
        output_graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(output_graph_def, name='')
        with tf.Session(graph=graph) as sess:
            try:
                sess.run(graph.get_operation_by_name('init_all_tables'))
            except KeyError:
                pass
            print sess.run('graph/output:0', feed_dict={'graph/data:0': ['a', 'b', 'c', 'd', 'e']})

if __name__ == '__main__':
    freeze_graph()
    load_frozen_graph()
```

The call to `init_all_tables` is wrapped in a try/except KeyError so the loader still works for graphs that contain no tables. |
|
Thanks @jkiske, your workaround works for me. |
|
It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. |
|
|
PING! Any progress? |
|
@jkiske's workaround worked for me! We also built on top of it to make it work with `tf.train.export_meta_graph` / `tf.train.import_meta_graph`, so the loader can simply call `tf.tables_initializer()`.
So here's a solution that works for us:

```python
import os
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.ops.lookup_ops import HashTable, KeyValueTensorInitializer

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

OUTPUT_FOLDER = '/tmp'
OUTPUT_NAME = 'hash_table.pb'
OUTPUT_NAMES = ['graph/output']

def build_graph():
    d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
    init = KeyValueTensorInitializer(d.keys(), d.values())
    hash_table = HashTable(init, default_value=-1)
    data = tf.placeholder(tf.string, (None,), name='data')
    values = hash_table.lookup(data)
    output = tf.identity(values * 2, 'output')

def freeze_graph():
    with tf.Graph().as_default() as graph:
        with tf.name_scope('graph'):
            build_graph()
        with tf.Session(graph=graph) as sess:
            sess.run(tf.tables_initializer())
            print sess.run('graph/output:0', feed_dict={'graph/data:0': ['a', 'b', 'c', 'd', 'e']})
            for table_init_op in tf.get_collection(tf.GraphKeys.TABLE_INITIALIZERS):
                OUTPUT_NAMES.append(table_init_op.name)
            frozen_graph = convert_variables_to_constants(sess, sess.graph_def, OUTPUT_NAMES)
            tf.train.export_meta_graph(
                filename=os.path.join(OUTPUT_FOLDER, OUTPUT_NAME),
                graph_def=frozen_graph,
                collection_list=[tf.GraphKeys.TABLE_INITIALIZERS])

def load_frozen_graph():
    with tf.Graph().as_default() as graph:
        tf.train.import_meta_graph(os.path.join(OUTPUT_FOLDER, OUTPUT_NAME))
        with tf.Session(graph=graph) as sess:
            sess.run(tf.tables_initializer())
            print sess.run('graph/output:0', feed_dict={'graph/data:0': ['a', 'b', 'c', 'd', 'e']})

if __name__ == '__main__':
    freeze_graph()
    load_frozen_graph()
``` |
|
|
I think the appropriate workarounds are documented here. Additionally using SavedModel with Estimators handles this correctly. I am going to close this issue for now. |
I ran into this issue when using https://github.com/tensorflow/models/tree/master/official/wide_deep to freeze a graph and run prediction.
Using op.run() instead of session.run(tensor) throws no error.
I am not sure if this is an actual bug or if it's expected but undocumented behavior.
I have a model that uses multiple lookup tables created via string_to_index. I freeze the model like so:

```shell
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/tmp/tf/graph.pbtxt \
  --input_checkpoint=/tmp/tf/model.ckpt-0 \
  --output_graph=/tmp/ticker_classifier.pb \
  --output_node_names=sigmoid \
  --initializer_nodes=init_all_tables
```

However, when the model is reloaded and I attempt to run it, I get the error "Table not initialized." I get exactly the same resulting file whether I specify initializer_nodes or not. The behavior I was expecting was for the model to contain the lookup tables in a ready-to-use state for inference, but I don't know if that is an unreasonable expectation.
What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?
I have not seen any issues related to this. I previously posted about this here http://stackoverflow.com/questions/42916383/how-to-properly-freeze-a-tensorflow-graph-containing-a-lookuptable
Environment info
Operating System: MacOS and Linux (CentOS 7)
Installed version of CUDA and cuDNN: None
If installed from source, provide:
Commit hash (git rev-parse HEAD): 07bb8ea
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Mar 16 12:19:38 2017 (1489666778)
Build timestamp: 1489666778
Build timestamp as int: 1489666778
If possible, provide a minimal reproducible example (We usually don't have time to read hundreds of lines of your code)
I have been unable to make a small example but I can spend more time on it if needed.
What other attempted solutions have you tried?
The workaround is to add init_all_tables to the output node names and then run init_all_tables before feeding the session examples for inference. This has the side effect that the source files for the tables must be distributed to the same path, on every node, as was originally used for training.