[deeplab + cityscape] Frozen inference graph provided is slower than a self-exported graph. #4525

Closed
cheneeheng opened this issue Jun 13, 2018 · 4 comments

@cheneeheng


System information

  • What is the top-level directory of the model you are using: deeplab
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.7.0
  • Bazel version (if compiling from source): NA
  • CUDA/cuDNN version: CUDA 9.0 / cuDNN 5.1
  • GPU model and memory: Quadro M4000
  • Exact command to reproduce: NA

Describe the problem

  1. Running inference on a Cityscapes image with the provided frozen inference graph took ~3.7 s.
  2. Running inference on the same image with a frozen inference graph exported from the provided checkpoint took ~0.9 s.
    (Both are below the ~5 s runtime listed in model_zoo.md.)

I used the official export code. The arguments passed to it are shown below, and a sanity check on the resulting graph follows the flag list:
--checkpoint_path="/path/to/model.ckpt"
--export_path="/path/to/frozen_inference_graph.pb"
--model_variant="xception_65"
--atrous_rates=6
--atrous_rates=12
--atrous_rates=18
--output_stride=16
--decoder_output_stride=4
--num_classes=19
--crop_size=1025
--crop_size=2049
--inference_scales=1.0
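
To sanity-check the export, I load the resulting graph and look up the input/output tensors. This is a minimal sketch; the path is a placeholder, and ImageTensor:0 / SemanticPredictions:0 are the tensor names used by the official demo notebook:

import tensorflow as tf

# Load the frozen GraphDef from disk (path is a placeholder).
graph_def = tf.GraphDef()
with tf.gfile.GFile('/path/to/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph with no name prefix.
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# The demo notebook feeds ImageTensor:0 and fetches SemanticPredictions:0.
print(graph.get_tensor_by_name('ImageTensor:0'))
print(graph.get_tensor_by_name('SemanticPredictions:0'))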

The only changes I made were pulling the inference code out of the demo ipynb, adding time.time() calls for timing, and adding a small utility function to loop over a directory of images.
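
The timing loop itself is roughly the following (a minimal sketch, reusing graph from the snippet above; the first sess.run is excluded from the measurement because it includes one-off graph optimization and cuDNN autotuning):

import time
import numpy as np
import tensorflow as tf

with tf.Session(graph=graph) as sess:
    # Dummy Cityscapes-sized input; shape is (batch, height, width, channels).
    image = np.zeros((1, 1025, 2049, 3), dtype=np.uint8)
    # Warm-up run, excluded from timing.
    sess.run('SemanticPredictions:0', feed_dict={'ImageTensor:0': image})
    start = time.time()
    sess.run('SemanticPredictions:0', feed_dict={'ImageTensor:0': image})
    print('inference time: %.2fs' % (time.time() - start))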

A quick check with TensorBoard shows that my exported graph has only 1216 nodes, compared to 1311 nodes in the graph provided.
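
To see where the extra nodes come from, the two GraphDefs can be diffed by op type (a minimal sketch; paths are placeholders):

import collections
import tensorflow as tf

def op_histogram(path):
    # Count how many nodes of each op type the frozen graph contains.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    return collections.Counter(node.op for node in graph_def.node)

mine = op_histogram('/path/to/my_frozen_inference_graph.pb')
provided = op_histogram('/path/to/provided_frozen_inference_graph.pb')

# Print op types whose counts differ between the two graphs.
for op in sorted(set(mine) | set(provided)):
    if mine[op] != provided[op]:
        print(op, mine[op], provided[op])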

Questions:

  1. Is the runtime difference due to wrong export arguments on my side?
  2. Was the provided frozen graph created from a different checkpoint, which could explain the difference in runtime?
  3. Am I missing something here?

Thank you.

@amalF commented May 16, 2019

Hello,
I'm facing the same issue. Did you find answers to your questions?
Thanks

@aptlin commented Jun 10, 2019

The Cityscapes frozen graph model converted to tfjs tends to consume substantially more resources than its counterparts (PASCAL and ADE20K). I wonder whether this might be related to this issue. Any ideas why this happens?

@tensorflowbutler (Member)

Hi there,
We are checking to see if you still need help on this, as it seems to be a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue, and the error you are seeing.
If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing it.

@devvikas

@cheneeheng How did you export the model using the existing checkpoints?
