Importing trained model to OpenCV #149

Closed
vincentcaux opened this issue Jan 7, 2018 · 6 comments

@vincentcaux

Has anybody had any success importing the trained model into OpenCV?
I have looked around, and it seems OpenCV requires the inference graph and weights as protobuf files:
https://github.com/opencv/opencv/blob/3.4.0/samples/dnn/mobilenet_ssd_python.py
Some suggest that the model checkpoint (.ckpt) files can be converted to .pb using freeze_graph, but when I tried, it seems to require the graph in .pbtxt format, which in my case is not among the training files (in RUNS). I only have the .meta, .index, and .data files. Any help would be appreciated!

P.S. While I have a lot of experience with OpenCV, I am a beginner with TensorFlow and KittiSeg.
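
For reference, this is roughly what I'm trying to end up with on the OpenCV side: the dnn module's TensorFlow importer, which wants a frozen graph. A minimal sketch, assuming a frozen_graph.pb already exists (the file name and the 1248x384 input size are placeholders, not values taken from KittiSeg):

```python
import cv2

# Load a frozen TensorFlow graph with OpenCV's dnn module.
# "frozen_graph.pb" is a placeholder; a .pbtxt text graph can be passed
# as an optional second argument if the importer needs it.
net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")

# Build a 4D blob from an image and run a forward pass.
image = cv2.imread("test.png")
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(1248, 384),
                             mean=(0, 0, 0), swapRB=True, crop=False)
net.setInput(blob)
output = net.forward()
print(output.shape)
```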

@obendidi commented Jan 7, 2018

Check the Google object detection API; they have a couple of scripts there to convert .ckpt and .meta files to .pb and .pbtxt. You just need to tweak them to fit the input/output of KittiSeg.

@vincentcaux

Thanks for the prompt reply!
I did find export_inference_graph, which could be used, but it requires a config file that I don't have. I imagine this config file could be derived from one of the model files (the architecture?).
Here is the reference to export_inference_graph:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
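
For reference, the invocation documented on that page looks roughly like the sketch below; the pipeline config it asks for is the object detection API's training config, which KittiSeg doesn't produce, so all paths here are placeholders:

```
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_graph
```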

@vincentcaux

I was able to modify the demo.py script (which already reads the model in) and save the binary protobuf with this command:
tf.train.write_graph(sess.graph_def, '', 'graph.pb', as_text=False)
Now I'm getting unknown layer errors, which I think can be fixed with freeze_graph.
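
In case it helps, the in-process equivalent of freeze_graph would look roughly like the sketch below, run inside the session that demo.py builds (the output node name is a placeholder I still need to figure out):

```python
import tensorflow as tf

# Placeholder: must be replaced with the graph's real output op name.
OUTPUT_NODE = "output_node_name"

# `sess` is assumed to be the session demo.py creates, with the KittiSeg
# graph built and the checkpoint restored.
# Write the unfrozen graph definition (as above) ...
tf.train.write_graph(sess.graph_def, '', 'graph.pb', as_text=False)

# ... then freeze it: variables become constants so the .pb is self-contained.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [OUTPUT_NODE])
tf.train.write_graph(frozen_graph_def, '', 'frozen_graph.pb', as_text=False)
```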

@vincentcaux

@obendidi I noticed you wrote on July 31st that you froze the graph. What output layer name did you use? After a lot of research online I found that "softmax" might be the right name, but both
minimal_graph = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["softmax"])
and
python freeze_graph.py --input_graph graph.pb --output_graph graph_out.pb --frozen_graph True --output_names softmax
fail with "no field named softmax". Any pointers?
Thanks!
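
In the meantime, I'm listing the op names in the graph to look for candidates (again assuming the sess from demo.py); a rough sketch:

```python
# List candidate output ops in the loaded graph; the name passed to
# convert_variables_to_constants / freeze_graph must match one of these
# exactly (e.g. "some_scope/Softmax" rather than just "softmax").
for node in sess.graph_def.node:
    if "softmax" in node.name.lower() or "decoder" in node.name.lower():
        print(node.name, node.op)
```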

@dpattison3

@vincentcaux I was working on this last week. I can give you the scripts I used, to save you the trouble.
Here's one to save the model: https://gist.github.com/dpattison3/0b08002479d4e4f8eb98d6cc55f500a5
and here's an example of how to load and run the model: https://gist.github.com/dpattison3/26bf10fabc0dc08c4b19920c2330e39b

I am new to this as well, so no guarantee that I'm doing this right.
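
For context, the generic TF 1.x pattern for loading and running a frozen graph looks roughly like this (the tensor names and input shape here are placeholders, not necessarily what the gists use):

```python
import numpy as np
import tensorflow as tf

# Read the frozen graph definition from disk.
with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Run a forward pass; tensor names and shape are placeholders.
with tf.Session(graph=graph) as sess:
    dummy_image = np.zeros((1, 384, 1248, 3), dtype=np.float32)
    output = sess.run("output_node:0",
                      feed_dict={"image_input:0": dummy_image})
    print(output.shape)
```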

@vincentcaux

@dpattison3 thanks for your reply! Your conversion script worked with no issues. Unfortunately, I am trying to load the graph with OpenCV, and there is at least one unimplemented layer, of type ExpandDims, which prevents me from doing so. My solution will be to use the TensorFlow C++ API to load and run the model, converting between cv::Mat and tf::Tensor as required.
The remaining issues are beyond the scope of this thread, so I will close it. Thanks for your help!
