tf.import_graph_def: graph_def is invalid at node #4044
Comments
Hi, has there been any progress on this problem? I am getting similar symptoms.
I've had the same problem; can anyone give a solution?
Looks like a bug in `freeze_graph`, right @petewarden?
I'm also experiencing this problem when trying to load a previously frozen facenet graph. Is there any progress on this? Is there a way to help debug this issue?
Came across a similar issue when calling `import_graph_def(graph_def, name="")`; any solution? My error was caused by an incorrect `graph.pb`, which had been saved by a script using the wrong `[output_node]`. So when I tried to import the "incorrect" graph, it gave me the error message above. Hope this helps in case anyone has made the same silly mistake as I did. The script used `from tensorflow.python.framework import graph_util`.
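To make the mistake above concrete, here is a self-contained sketch of freezing a toy graph with `convert_variables_to_constants` and checking that the output node survives. The graph and the names `x`, `w`, `y` are made up for illustration, and the TF 1.x-style API is reached through `tf.compat.v1` so the sketch runs on current TensorFlow:

```python
# Sketch only: a toy graph; the names "x", "w", "y" are hypothetical.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 2], name="x")
    w = tf.compat.v1.get_variable("w", shape=[2, 3])
    y = tf.identity(tf.matmul(x, w), name="y")
    init = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session(graph=g) as sess:
    sess.run(init)
    # Passing a node name that does not exist in the graph fails here,
    # instead of producing a graph.pb that only errors at import time.
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ["y"])

# Every variable is now a constant, and the output node survives by name.
print(sorted(n.name for n in frozen.node))
```

Freezing fails immediately on a wrong output node name, which is much easier to debug than importing a broken `graph.pb` later.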
Does anyone have a simple repro case with the simplest graph possible? |
I have the same issue:
`ValueError: graph_def is invalid at node 'word_embeddings/Variable/Assign': Input tensor 'word_embeddings/Variable:0' Cannot convert a tensor of type float32 to an input of type float32_ref.`
Closing since we don't have a simple repro case. |
How do I find the output_node name for my trained model? @hellowangqian
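One way to answer this question (a sketch, not an official tool): load the `GraphDef` and list the nodes that no other node consumes — those are usually the outputs. The toy graph and the names `input`/`prediction` below are made up; with a real model you would `ParseFromString()` the bytes of your `graph.pb` instead:

```python
import tensorflow as tf

# Toy graph so the sketch is self-contained; the names are hypothetical.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, name="input")
    y = tf.identity(x * 2.0, name="prediction")

graph_def = g.as_graph_def()

# Collect every node name that appears as an input of some other node;
# the remaining nodes are candidate output nodes.
consumed = {i.split(":")[0].lstrip("^")
            for n in graph_def.node for i in n.input}
outputs = [n.name for n in graph_def.node if n.name not in consumed]
print(outputs)  # → ['prediction']
```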
Any update on the error above? `ValueError: graph_def is invalid at node 'lstm_3/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3': Input tensor 'lstm_3/TensorArray_1:0' Cannot convert a tensor of type resource to an input of type float32.`
Please reopen.
I've encountered the same problem, using the example model from After Only.
It's possibly because you trained and froze the model using an old version of TensorFlow, and then tried to import the graph using a newer version.
Nagging Assignee @petewarden: It has been 44 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. |
This issue was created for an old TensorFlow version. If you still face the same problem with the new version, please file a new issue. |
@Kongsea @petewarden @wt-huang I have kept versions consistent, but I am still not able to solve it: `Graph_def is invalid at node u'ExpandDims': Input tensor 'image_ph:0' Cannot convert a tensor of type float32 to an input of type int32.`
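Errors of this shape usually mean the tensor supplied for a graph input (for example via `input_map`) has a different dtype than the node in the `GraphDef` expects. A self-contained sketch that reproduces the mismatch and then imports cleanly; the toy graph and the names `image_ph`/`out` are made up:

```python
import tensorflow as tf

# Toy graph whose input is int32; the names are hypothetical.
g = tf.Graph()
with g.as_default():
    idx = tf.compat.v1.placeholder(tf.int32, [None], name="image_ph")
    tf.identity(idx + 1, name="out")
graph_def = g.as_graph_def()

g2 = tf.Graph()
caught = False
with g2.as_default():
    bad = tf.compat.v1.placeholder(tf.float32, [None])   # wrong dtype
    good = tf.compat.v1.placeholder(tf.int32, [None])    # matches the graph
    try:
        # Remapping the int32 input to a float32 tensor reproduces the
        # "Cannot convert a tensor of type float32 ..." ValueError.
        tf.import_graph_def(graph_def, input_map={"image_ph:0": bad},
                            name="bad")
    except ValueError as e:
        caught = True
        print(e)
    # Remapping with the matching dtype imports cleanly.
    (result,) = tf.import_graph_def(graph_def,
                                    input_map={"image_ph:0": good},
                                    return_elements=["out:0"], name="ok")
```

The fix is to make the placeholder (or the upstream op) match the dtype the frozen graph was built with, not to cast after the fact.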
I've been trying to import a frozen graph into a new program and do a simple forward pass, but `tf.import_graph_def` has been throwing a ValueError that I really can't make sense of.

Environment info

Operating System: Ubuntu 14.04 LTS 64-bit
Installed version of CUDA and cuDNN: none
If installed from source, provide the commit hash (`git rev-parse HEAD`): fc91629
`bazel version`:
Steps to reproduce

1. Changed `sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))` to `sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b), name="sample_prediction")`.
2. `checkpoint.ckpt` and `graph.pb` have been created.
3. Ran `bazel build tensorflow/python/tools:freeze_graph && bazel-bin/tensorflow/python/tools/freeze_graph --input_graph=/home/me/Documents/graph.pb --input_checkpoint=/home/me/Documents/checkpoint.ckpt --output_graph=/home/me/Documents/frozen_graph.pb --output_node_names=sample_prediction`.
4. `frozen_graph.pb` has been created.

What have you tried?
The output node was originally named `saved_sample_output`, and when I tried importing that frozen graph, the error complained about `saved_sample_output:0`. I tried removing the name, re-writing the checkpoint and graph files, re-freezing, and re-running the code. It then complained about `Variable_17:0`, which, after checking `graph.pb`, was what had originally been named `saved_sample_output`. Other than that, I haven't been able to find anything else out. `import_graph_def` never had an input map to begin with.

Logs or other output that would be helpful
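For reference, a self-contained round trip (build, freeze in memory, re-import, forward pass) that succeeds when the output node name matches. The model and the names `input`, `w`, and `sample_prediction` are invented stand-ins for the ones described above, using `tf.compat.v1` so the sketch runs on current TensorFlow:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Build and initialize a tiny stand-in model.
g = tf.Graph()
with g.as_default():
    inp = tf.compat.v1.placeholder(tf.float32, [None, 2], name="input")
    w = tf.compat.v1.get_variable("w", initializer=tf.ones([2, 1]))
    pred = tf.identity(tf.matmul(inp, w), name="sample_prediction")
    init = tf.compat.v1.global_variables_initializer()

# Freeze: variables become constants in the exported GraphDef.
with tf.compat.v1.Session(graph=g) as sess:
    sess.run(init)
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ["sample_prediction"])

# Import the frozen GraphDef into a fresh graph and run a forward pass.
g2 = tf.Graph()
with g2.as_default():
    tf.import_graph_def(frozen, name="")

with tf.compat.v1.Session(graph=g2) as sess:
    out = sess.run("sample_prediction:0",
                   feed_dict={"input:0": np.ones([1, 2], np.float32)})
print(out)  # → [[2.]]
```

If the name passed to `--output_node_names` does not match the op name actually recorded in `graph.pb`, the freeze or the later import fails in exactly the way this thread describes.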