[Bug] Unable to convert TF_slim model to IR format #3787
Comments
Hi @varunjain3, thanks for reaching out. I am trying to access your model files but am unable to; I just requested access. Allow me to investigate once I can see your model files. Regards,
Hi @avitial, this link might help: https://drive.google.com/folderview?id=1plG4GE3qLLtB35om4gcKTx8YQgM6LqUT. Let me know. I haven't set any restrictions on the sharing settings.
@varunjain3 I can access your model files now. May I ask how you froze the model? Glancing at final.pb, I can't find an input layer.
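A quick way to check this is to list the Placeholder ops in the frozen graph. Below is a minimal sketch, assuming TF 1.15-style tf.compat.v1 APIs and that the file is named final.pb as in this thread:

```python
# Minimal sketch: list the input nodes of a frozen graph.
# Assumes tf.compat.v1 APIs (TF 1.15) and a frozen model named "final.pb".
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("final.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# The inputs of a properly frozen graph normally appear as Placeholder ops.
placeholders = [n.name for n in graph_def.node if n.op == "Placeholder"]
print("Placeholder (input) nodes:", placeholders)
```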
@avitial I tried several approaches; all of them had the same issue.
I did this, as I wasn't able to build TensorFlow with Bazel :/
@varunjain3 thanks for the clarification; unfortunately I am unable to convert this specific model. Let me check with my peers and see if they have any insights. If you don't mind, please include the freeze_graph.py script you used as well as the flags, plus a link to the base InceptionV3 model used for fine-tuning (I assume you got it from tf_slim). Regards,
@avitial I have added freeze_graph.py to the same drive folder for your reference; from my observation the file works with TensorFlow > 2.0. Here are the flags I used with freeze_graph.py:

```
python freeze_graph.py --input_graph=$(TRAIN_DIR)/all/graph.pbtxt \
    --input_checkpoint=$(TRAIN_DIR)/all/model.ckpt-3000 \
    --input_binary=false \
    --output_graph=$(TRAIN_DIR)/all/frozen_graph.pb \
    --output_node_names="InceptionV3/Predictions/Reshape_1"
```

Yes, I have closely followed all the steps from the official tf_slim repository, just replaced the dataset with mine.
@varunjain3 The issue comes from the first layers in the frozen graph: prefetch_queue/fifo_queue and fifo_queue_Dequeue. They should not be there, so I think something went wrong during freezing. Our documentation shows how to freeze and convert TF-Slim models: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models.html. First, make sure that your steps match those. Second, both in this link and in the forum link you had, there is a step to inspect the topology (summarize_graph.py). Could you post its output? The current graph does not have "input" as its actual input but prefetch_queue/fifo_queue.
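For reference, that topology check can be run with the summarize_graph.py utility shipped with the Model Optimizer. A sketch, assuming <MO_ROOT> stands for your Model Optimizer install directory (the exact script location may differ between OpenVINO releases):

```
python3 <MO_ROOT>/mo/utils/summarize_graph.py --input_model frozen_graph.pb
```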
@sevhabert I was using the same documentation initially, but the document only covers converting the pre-trained checkpoint of InceptionV1. Going by the issues others have raised on the same problem, I found the forum insights useful and stuck to them closely.
When I ran summarize_graph.py, I got the same result as you: the input layer is prefetch_queue/fifo_queue. As the documentation on the OpenVINO site is quite old, it would really help if there were a step-by-step guide to converting a custom-trained model for the newer Inception models.
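For comparison, the flow in the linked OpenVINO guide boils down to exporting an inference graph and then freezing it against the fine-tuned checkpoint. A sketch, with the checkpoint name taken from the flags quoted earlier; the dataset settings are assumptions for a custom dataset:

```
# 1. Export the inference graph from tensorflow/models/research/slim.
#    For a custom dataset, the dataset flags (and thus the class count)
#    must match the fine-tuned model.
python3 export_inference_graph.py \
    --model_name inception_v3 \
    --output_file inception_v3_inference_graph.pb

# 2. Freeze it with the fine-tuned checkpoint (the exported graph is binary).
python3 freeze_graph.py \
    --input_graph inception_v3_inference_graph.pb \
    --input_binary=true \
    --input_checkpoint model.ckpt-3000 \
    --output_graph frozen_inception_v3.pb \
    --output_node_names InceptionV3/Predictions/Reshape_1
```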
If possible, could you share a guide to making TFRecords from a custom dataset of images? I'm not sure if that could be the cause of this problem, though.
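In case it helps, here is a minimal sketch of writing images into a TFRecord in the layout tf_slim expects (image/encoded and image/class/label features); the file names and labels below are placeholders:

```python
# Minimal TFRecord-writing sketch (TF 1.15-compatible API);
# image paths and integer labels are placeholders.
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

with tf.io.TFRecordWriter("train.tfrecord") as writer:
    for path, label in [("img0.jpg", 0), ("img1.jpg", 1)]:
        with tf.io.gfile.GFile(path, "rb") as f:
            encoded = f.read()  # raw JPEG bytes, stored as-is
        example = tf.train.Example(features=tf.train.Features(feature={
            "image/encoded": _bytes_feature(encoded),
            "image/class/label": _int64_feature(label),
        }))
        writer.write(example.SerializeToString())
```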
I tested summarize_graph on your model frozen.pb using TF 1.15, and I actually get an error due to a TF version mismatch. The graph has some attributes only present in TF 2.0: onnx/tensorflow-onnx#862. Mixing TF versions between training (TF 1.15) and freezing (TF 2.0) should be avoided and can only be a source of errors. Can you freeze your model with TF 1.15?
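One way to do that without a Bazel build is the freeze_graph tool bundled inside the TensorFlow 1.15 package itself. A sketch, reusing the flags quoted earlier in this thread:

```
# Run under a TF 1.15 environment; no Bazel build required.
python -m tensorflow.python.tools.freeze_graph \
    --input_graph=$(TRAIN_DIR)/all/graph.pbtxt \
    --input_checkpoint=$(TRAIN_DIR)/all/model.ckpt-3000 \
    --input_binary=false \
    --output_graph=$(TRAIN_DIR)/all/frozen_graph.pb \
    --output_node_names=InceptionV3/Predictions/Reshape_1
```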
Closing this; I hope the previous responses were sufficient to help you proceed. Feel free to reopen and ask additional questions related to this topic. ~Luis
System information (version)
Detailed description
I want to use my custom fine-tuned InceptionV3 classification model with dlstreamer. I was following this reference and this reference.
I completed all the steps as per the TF_slim repository. After running freeze_graph.py I was able to freeze my model, with a file size of ~85 MB, exactly as in this post.
Now I'm not really sure how I can use mo_tf.py to convert my frozen model to the .xml and .bin format.
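For context, the Model Optimizer invocation the Slim conversion guide describes looks roughly like this; the input name, shape, and preprocessing values below are assumptions for an InceptionV3 classifier:

```
python3 mo_tf.py \
    --input_model frozen_graph.pb \
    --input input \
    --input_shape [1,299,299,3] \
    --mean_values [127.5,127.5,127.5] \
    --scale 127.5 \
    --output InceptionV3/Predictions/Reshape_1 \
    --reverse_input_channels
```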
I tried these commands and all of them failed:
Extra Details:
- Used the master branch of the tensorflow/models repo, commit 7786b741e3ee5692819264568341dfc84f6b07b7, to fine-tune the pre-trained model with TF 1.15
- Used TF 2.1 to convert the frozen model using freeze_graph.py
- Using the current development Docker image of dl_streamer for the OpenVINO environment
Here are my model files and the frozen model - https://drive.google.com/drive/folders/1lWHnrEos6fy0hE7qr8o832sg1v0cA2zF?usp=sharing