Ways to freeze RetinaNet to a .pb file? #125
Comments
Same issue: how do I run inference?
I also want to run inference with one of the trained models and I added the following code in the retinanet_main.py main function after the train and eval if checks:
I am trying to run this on a GeForce 1080 with 12 GB of RAM, using a single TFRecord file of about 150 MB, but after running for about a minute I get a CUDA_ERROR_OUT_OF_MEMORY error. I can't figure out what I am doing wrong, or which parameters I should pass to the Estimator's predict method to make it work.
The actual problem seems to be that the RetinaNet model function adds the input image to the prediction dictionary, so if you run the above code over a large number of images, it stores all of them in memory, which causes the out-of-memory error. The solution was to run predictions over a smaller number of images and/or to remove the image from the predicted output. So, if you are looking for a way to run predictions with trained checkpoints from the TPU RetinaNet, the code above should let you do that without converting the checkpoints to .pb files.
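The image-stripping fix described above can be sketched as a small wrapper around the prediction iterator. This is a minimal sketch, not code from the repo: the `'image'` key name follows the discussion above, and the exact key in the RetinaNet prediction dict may differ.

```python
def strip_images(predictions, key='image'):
    """Yield prediction dicts with the (large) decoded image removed.

    Dropping the image tensor before accumulating results keeps memory
    bounded when iterating over many predictions.
    """
    for pred in predictions:
        pred = dict(pred)      # shallow copy so the caller's dict is untouched
        pred.pop(key, None)    # silently skip entries that lack the key
        yield pred

# With a tf.estimator.Estimator this would be used as, e.g.:
#   results = list(strip_images(estimator.predict(input_fn=predict_input_fn)))
```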
Same issue.
@ernstgoyer a starting point would be to look at how that is done in their evaluation code. |
Hi @xiaoyongzhu, I am facing the same issue. I have a RetinaNet implementation using TensorFlow eager mode; I'm able to do the training, and it saves the .ckpt files, but I can't find a way to use the model for inference. Any suggestions would be appreciated, thank you.
Hi @xiaoyongzhu, thank you for the reply. Can you please send me the link for the TensorFlow Object Detection API with RetinaNet added? It would be helpful, thanks.
Closing the issue as it seems this has been resolved.
Is there a way to freeze a RetinaNet checkpoint to a .pb file for further inference after it has been trained? From my limited knowledge, there are two ways to convert a checkpoint to a .pb file in TF, and neither of them works for the trained RetinaNet model.
1. Use the `freeze_graph` tool from TensorFlow, as described here (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py). However, this command requires specifying the `output_node_names` parameter, which is hard to determine for RetinaNet by analyzing its graph or by using the `summarize_graph` tool provided here (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#inspecting-graphs). The `summarize_graph` tool reports over 1,000 possible names.
2. Use the `export_inference_graph` tool provided by the Object Detection API (https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py), which requires a model definition, and that does not yet exist for RetinaNet.

So my question is: what's the best way to freeze the trained RetinaNet model to a .pb file for further inference?
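For reference, a typical `freeze_graph` invocation looks like the following. This is only a sketch: the paths are hypothetical, and the `--output_node_names` value is a placeholder, since identifying the real output node is exactly the difficulty described above.

```shell
# Hypothetical paths; OUTPUT_NODE_PLACEHOLDER must be replaced with the
# model's actual output node name(s), comma-separated.
python freeze_graph.py \
  --input_graph=/tmp/retinanet/graph.pbtxt \
  --input_checkpoint=/tmp/retinanet/model.ckpt \
  --input_binary=false \
  --output_graph=/tmp/retinanet/frozen_graph.pb \
  --output_node_names=OUTPUT_NODE_PLACEHOLDER
```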