Not able to port a 6-layered mobilenet tflite model to mobile #21368
Comments
Following is the log from the TOCO converter: tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 224 operators, 311 arrays (0 quantized). I noticed that RandomUniform is complaining about the seed. Is there a way I can set the seed for /dev/random from the system?
Correct me if I'm wrong, but those ops seem to be part of your model. If it's part of the model, it won't get stripped and you'll need custom ops for them. I'm curious about your model; can you post it?
Hi gragundier, thanks for your reply.
It could be that you are now feeding in a placeholder input. You seem to have a queue and some loss functions. Could you provide the frozen graphdef (or even a screenshot of the graph) so that we can see what else it could be? @gargn, could you comment?
Hi, Attached is the frozen model.
I am still not able to understand this fully, although I was able to find a way to work around this problem.
I ran the following command on the TensorFlow nightly build (installed using the command
I raise this point because this error seems different from the one that you noted. I looked into the model using TensorBoard, and it appears that your model is a MobileNet training graph containing the ops FIFOQueueV2, QueueDequeueV2, and SquaredDifference. TensorFlow Lite only works with eval graphs, not training graphs. In order to create a MobileNet eval graph:
After you do this, try using the
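For reference, the usual two-step recipe for producing a frozen MobileNet eval graph is sketched below. This is only a sketch: the script locations and flags are assumed from the tensorflow/models slim README of that era, the paths are placeholders, and the output node name (`Reshape_1` vs. `Reshape`) depends on your exact checkpoint, so verify it in TensorBoard first.

```shell
# Sketch only: export an inference (eval) graph for MobileNet, then freeze it.
# Adjust paths, image size, and the output node name for your checkpoint.

# 1. Export the eval graph definition (no queues, no loss ops).
python models/research/slim/export_inference_graph.py \
  --alsologtostderr \
  --model_name=mobilenet_v1 \
  --image_size=32 \
  --output_file=/tmp/mobilenet_v1_eval.pb

# 2. Freeze the checkpoint weights into that graph.
python tensorflow/python/tools/freeze_graph.py \
  --input_graph=/tmp/mobilenet_v1_eval.pb \
  --input_binary=true \
  --input_checkpoint=/path/to/model.ckpt-156300 \
  --output_node_names=MobilenetV1/Predictions/Reshape_1 \
  --output_graph=/tmp/frozen_mobilenet_v1.pb
```

Because the exported graph is built with is_training=False and a placeholder input, the training-only ops (queues, loss terms, RandomUniform from dropout) never appear in it, so there is nothing for the converter to choke on.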
Hi @gargn, thanks for your reply!
I trained a straightforward model which contains only two convolutional layers. The freeze and tflite conversion went smoothly, but when I deploy to mobile, the application throws a segmentation fault.
Since you are able to convert and the segfault is a new issue, can you please provide the resulting segfault stack trace/core dump?
Hi, following is the error message. Is that the stack trace you referred to? 08-19 13:39:32.244 1583-1663/? A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x0 in tid 1663 (CameraBackgroun), pid 1583 (flitecamerademo)
Nagging Assignee @gargn: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
I was able to get the following code working with last night's tf-nightly. It is based on the Python code that you provided. The main difference is that it freezes the graph and converts the Flatbuffer to a TFLite model within the Python code itself. Can you clarify if this is what you are looking for:
In order to get the
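An in-Python freeze-and-convert flow of the kind described above looked roughly like this with the tf-nightly of that period. This is a sketch, not the exact code from the thread: it assumes the `tf.contrib.lite.TocoConverter` API (later renamed `TFLiteConverter`), and the checkpoint path and tensor names are placeholders for your own.

```python
# Sketch: freeze and convert inside Python with tf-nightly's
# tf.contrib.lite.TocoConverter (later renamed TFLiteConverter).
import tensorflow as tf

with tf.Session() as sess:
    # Placeholder checkpoint path; substitute your own.
    saver = tf.train.import_meta_graph('model.ckpt-156300.meta',
                                       clear_devices=True)
    saver.restore(sess, 'model.ckpt-156300')

    graph = tf.get_default_graph()
    # Placeholder tensor names: use the real input/output of your eval graph.
    in_t = graph.get_tensor_by_name('input:0')
    out_t = graph.get_tensor_by_name('MobilenetV1/Predictions/Reshape:0')

    # from_session freezes the variables internally before conversion,
    # so no separate freeze_graph step is needed.
    converter = tf.contrib.lite.TocoConverter.from_session(sess, [in_t], [out_t])
    tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

Doing both steps in one process avoids a class of mistakes where the frozen .pb and the converter are given mismatched input/output array names.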
Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!
Please go to Stack Overflow for help and support:
https://stackoverflow.com/questions/tagged/tensorflow
If you open a GitHub issue, here is our policy:
Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
System information
Have I written custom code: No
OS platform and distribution: Ubuntu 16.04
Mobile device: Pixel 2
TensorFlow installed from: Source
TensorFlow version: 1.8
Python version: 2.7
Bazel version: 0.15.2
GCC/compiler version: 5.4.0
CUDA/cuDNN version: N/A
GPU model and memory: N/A
(3) Freeze the model
import tensorflow as tf
from tensorflow.python.framework import graph_util
import os, sys

output_node_names = "MobilenetV1/Predictions/Reshape"
saver = tf.train.import_meta_graph('/home/users/saman/yitao/tensorflow_android/models/research/slim/batch_32/model.ckpt-156300.meta', clear_devices=True)
graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
sess = tf.Session()
saver.restore(sess, "/home/users/saman/yitao/tensorflow_android/models/research/slim/batch_32/model.ckpt-156300")
output_graph_def = graph_util.convert_variables_to_constants(
    sess,                          # The session is used to retrieve the weights
    input_graph_def,               # The graph_def is used to retrieve the nodes
    output_node_names.split(",")   # The output node names are used to select the useful nodes
)
output_graph = "frozen-model-conv6-bat-32.pb"
with tf.gfile.GFile(output_graph, "wb") as f:
    f.write(output_graph_def.SerializeToString())
sess.close()
(4) Optimize the model
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=/home/yitao/TF_1.8/tensorflow/my_frozen_pb/frozen-model-conv6-bat-32.pb \
  --out_graph=/home/yitao/TF_1.8/tensorflow/my_frozen_pb/frozen-model-conv6-bat-32-optimized.pb \
  --inputs='input' \
  --outputs='MobilenetV1/Predictions/Reshape' \
  --transforms='
    strip_unused_nodes(type=float, shape="1,32,32,3")
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms'
(5) Convert the model to tflite
bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_file=/home/yitao/TF_1.8/tensorflow/my_frozen_pb/frozen-model-conv6-bat-32-optimized.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=/home/yitao/TF_1.8/tensorflow/my_frozen_pb/frozen-model-conv6-bat-32-optimized.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_arrays=input \
  --seed2 \
  --output_arrays=MobilenetV1/Predictions/Reshape \
  --input_shapes=1,32,32,3 \
  --allow_custom_ops
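Note that with --allow_custom_ops, conversion can succeed even when the flatbuffer still contains ops the on-device runtime cannot execute. One way to catch this before deploying is to load the .tflite with the Python interpreter on the desktop, where the same "Didn't find custom op" failure surfaces immediately. A sketch, assuming the tf.contrib.lite.Interpreter Python API from the same TensorFlow version; the model path is a placeholder:

```python
import numpy as np
import tensorflow as tf

# Placeholder path to the converted model.
interpreter = tf.contrib.lite.Interpreter(
    model_path='frozen-model-conv6-bat-32-optimized.tflite')
interpreter.allocate_tensors()  # fails here if an op has no registered kernel

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy 1x32x32x3 float image and run one inference.
dummy = np.zeros(input_details[0]['shape'], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)
```

If this runs cleanly on the desktop but the Android app still crashes, the problem is more likely in the app's native code than in the model itself.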
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
You can obtain the TensorFlow version with
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Describe the problem
Not able to train a model from scratch and port it to Android to utilize the Android Neural Networks API through TFLite. After training a model and following the steps to convert the graph to a tflite model, there are still some ops in my graph that are not supported by the TFLite runtime. What should I do?
Any help is appreciated!
Logcat is throwing the following errors. It seems that those ops are not stripped from the model during the optimization step.
Source code / logs
08-03 15:14:52.183 10271-10271/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: main
Process: android.example.com.tflitecamerademo, PID: 10271
java.lang.RuntimeException: Unable to start activity ComponentInfo{android.example.com.tflitecamerademo/com.example.android.tflitecamerademo.CameraActivity}: java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: Didn't find custom op for name 'RandomUniform' with version 1
Didn't find custom op for name 'FLOOR' with version 1
Didn't find custom op for name 'RSQRT' with version 1
Didn't find custom op for name 'FIFOQueueV2' with version 1
Didn't find custom op for name 'QueueDequeueV2' with version 1
Didn't find custom op for name 'SquaredDifference' with version 1
Registration failed.
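The check the interpreter performs here boils down to a set difference between the ops present in the model and the ops the runtime has kernels for. The logic can be sketched in a few lines of plain Python; the op lists below are illustrative only (taken from the log above plus a hypothetical, deliberately incomplete supported set), not the runtime's real registry.

```python
# Minimal sketch: given the op names found in a converted model and a
# (hypothetical, incomplete) set of ops the runtime has kernels for,
# report which ops would need custom implementations.

def unsupported_ops(graph_ops, supported):
    """Return ops the runtime would reject, deduplicated, in first-seen order."""
    seen = set()
    out = []
    for op in graph_ops:
        if op not in supported and op not in seen:
            seen.add(op)
            out.append(op)
    return out

# Op names taken from the logcat output above, plus Conv2D for contrast.
graph_ops = ["Conv2D", "RandomUniform", "Floor", "Rsqrt",
             "FIFOQueueV2", "QueueDequeueV2", "SquaredDifference"]
supported = {"Conv2D", "Floor", "Rsqrt"}  # illustrative subset only

print(unsupported_ops(graph_ops, supported))
# -> ['RandomUniform', 'FIFOQueueV2', 'QueueDequeueV2', 'SquaredDifference']
```

Running a check like this against the frozen graph's node list before conversion makes it obvious that the queue and loss ops belong to a training graph and will never be resolved by the mobile runtime.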