TFLite toco failed to convert quantized model (mobilenet_v1_1.0_224) to tflite format #19431
Comments
I have the same problem, any updates please?
See freedomtan's comment. transform_graph is not supported by TF Lite. See the guide for retraining. If you are looking into weight-only quantization (and are well aware of the possible accuracy degradation) you can pass --quantize_weights=true to toco.
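For reference, a minimal sketch of weight-only quantization through the TF 1.x Python converter API, roughly equivalent to passing --quantize_weights=true to the toco CLI. The file name is a placeholder and the tensor names are taken from the mobilenet_v1_1.0_224 release; adjust for your graph:

```python
import tensorflow as tf  # TF 1.x (tf.contrib.lite.TocoConverter before 1.13)

# Placeholder path; input/output names come from the mobilenet_v1_1.0_224 release.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="mobilenet_v1_1.0_224_frozen.pb",
    input_arrays=["input"],
    output_arrays=["MobilenetV1/Predictions/Reshape_1"],
    input_shapes={"input": [1, 224, 224, 3]})
converter.post_training_quantize = True  # weight-only quantization
tflite_model = converter.convert()

with open("mobilenet_weight_quant.tflite", "wb") as f:
    f.write(tflite_model)
```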
@cefengxu have you solved the problem?
@andrehentz did you mean the following when you said, "If you are looking into weight-only quantization (and are well aware of the possible accuracy degradation) you can pass --quantize_weights=true to toco"? Please help me understand this better.
Please help me solve this error?
I found a possible explanation for this issue in #15336.
This is the model I have been trying to convert to .lite format.
@SubhashKsr I think what @andrehentz referred to is weights-only post-training quantization. See the guide and the example.
TFLite conversion can be done from a SavedModel; I have given the link to the model below. This is as per the documentation here. I have a model here, which was exported as a SavedModel using the following code:
But while converting I am also getting the same error, even though in this case I am not freezing the model myself, nor quantizing it. Below is the code to convert the SavedModel to TFLite:
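The commenter's actual snippet was not preserved; a minimal sketch of a SavedModel-to-TFLite conversion with the TF 1.x Python API might look like the following, where saved_model_dir is a placeholder:

```python
import tensorflow as tf  # TF 1.x

# Hypothetical path to the exported SavedModel directory.
saved_model_dir = "/tmp/saved_model"

# Build a converter directly from the SavedModel (no manual freezing needed).
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```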
Logs:
@milinddeore Did you find a solution to this issue?
@andrehentz But is the freeze_graph script supported? Do you know what step causes this error to be thrown? Can you provide me some internal context on why this error might be occurring?
I am working on the same issue; would you be interested in quickly solving this problem over Slack/Appear/WhatsApp (real-time tools)?
@andrehentz can you please reopen this issue? Thanks.
I was able to convert FaceNet. We will quantize the pre-trained FaceNet model with a 512-dimensional embedding; this model is about 95 MB in size before quantization.
Create a file:
Run this file on the pre-trained model to generate a model for inference. Download the pre-trained model and unzip it to the model_pre_trained/ directory.
FaceNet provides
Once the frozen model is generated, it is time to convert it to .tflite:
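The commenter's conversion code was not preserved; a minimal sketch with the TF 1.x Python API, assuming the inference graph exposes a single "input" tensor (1x160x160x3, as in the public FaceNet release) and an "embeddings" output, and with placeholder file names:

```python
import tensorflow as tf  # TF 1.x

# Assumed tensor names from the public FaceNet release; adjust if yours differ.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="facenet_frozen.pb",   # placeholder path
    input_arrays=["input"],
    output_arrays=["embeddings"],
    input_shapes={"input": [1, 160, 160, 3]})
converter.post_training_quantize = True  # weight-only quantization
tflite_model = converter.convert()

with open("facenet_quant.tflite", "wb") as f:
    f.write(tflite_model)
```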
Let us check the quantized model size:
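One quick way to compare, using the placeholder file names from the sketch above:

```python
import os

# Compare sizes of the frozen graph and the quantized TFLite file.
for path in ("facenet_frozen.pb", "facenet_quant.tflite"):
    print("%s: %.1f MB" % (path, os.path.getsize(path) / (1024.0 * 1024.0)))
```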
Interpreter code:
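The commenter's interpreter snippet was not preserved; a minimal TF 1.x sketch running the quantized model on a dummy input, with the placeholder model path from above:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

# Load the quantized model (placeholder path) and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="facenet_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image of the expected shape and dtype.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# The output should be the 512-dimensional embedding.
embeddings = interpreter.get_tensor(output_details[0]["index"])
print(embeddings.shape)
```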
Interpreter output:
Hope this helps!
Closing per the latest comment, glad you got it working!
@milinddeore Wouldn't the inference graph produced (using the code you mentioned) correspond to the pre-trained FaceNet protobuf file? If I want to deploy a TFLite model on iOS, it has to be trained on the training images I provide, which means I need a TFLite version of the trained model. I have gone through #23253, where it is mentioned that a training graph can't be converted. As per my understanding, on iOS I require the trained model and its corresponding TFLite file. How can a pre-trained model's .pb, converted to a TFLite version, be used to predict on my data?
@Monikasinghjmi, it's hard to tell exactly what you're asking, but it doesn't sound like it's a bug/issue, and is probably best suited as a question on StackOverflow.
Describe the Problem
First, I downloaded the mobilenet_v1_1.0_224 model from http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz.
Then, I used the command below to successfully produce a quantized model (mobilenet_v1_1.0_224_frozen_quantized_graph.pb).
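The original command was not preserved. Judging from the replies above, it was presumably the transform_graph tool with a quantize_weights transform; a rough Python equivalent, with file and tensor names as assumptions, might look like this:

```python
import tensorflow as tf  # TF 1.x
from tensorflow.tools.graph_transforms import TransformGraph

# Load the frozen MobileNet graph (path is an assumption).
graph_def = tf.GraphDef()
with tf.gfile.GFile("mobilenet_v1_1.0_224_frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Apply the quantize_weights transform. Note: graphs produced this way
# are NOT supported by the TF Lite converter (see the replies above).
transformed = TransformGraph(
    graph_def,
    inputs=["input"],
    outputs=["MobilenetV1/Predictions/Reshape_1"],
    transforms=["quantize_weights"])

with tf.gfile.GFile("mobilenet_v1_1.0_224_frozen_quantized_graph.pb", "wb") as f:
    f.write(transformed.SerializeToString())
```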
However, when I used the TFLite toco command to convert the .pb to .lite format, an error was output.
TFLite toco Build Command:
ERROR OUTPUT:
ERROR LOG: