
Export as SavedModel generating empty variables directory #1988

Closed
lionel92 opened this issue Jul 19, 2017 · 17 comments

Comments

@lionel92

commented Jul 19, 2017

System information

  • What is the top-level directory of the model you are using:
    Object_detection
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    CentOS Linux release 7.3.1611
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below):
    ('v1.0.0-65-g4763edf-dirty', '1.0.1')
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version:
    cuda 8.0/cudnn 5
  • GPU model and memory:
  • Exact command to reproduce:
    python object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path object_detection/models/model/rfcn_resnet101_pedestrain.config --checkpoint_path object_detection/models/model/train/model.ckpt-573563 --inference_graph_path object_detection/models/model/pedestrain --export_as_saved_model=True

Describe the problem

The above command runs with no errors. However, it only generates a saved_model.pb file and an empty variables directory. According to https://tensorflow.github.io/serving/serving_basic.html, the variables directory should contain variables.data-?????-of-????? and variables.index files.
Is this normal?
There is no problem generating a model for inference and testing.

@asimshankar

Contributor

commented Jul 19, 2017

@jch1 can comment on the why of this, but it seems that the graph is "frozen" (i.e., all variables are converted to constant nodes in the graph), so there are no "variables" left in the computation and hence the directory is empty.
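For illustration, a minimal sketch of what that freezing amounts to (not the exact exporter.py code; the checkpoint path and output node names below are placeholders):

import tensorflow as tf

# Assume the detection graph has already been built in the default graph.
with tf.Session() as sess:
  saver = tf.train.Saver()
  saver.restore(sess, '/path/to/model.ckpt')  # placeholder checkpoint prefix
  # Every tf.Variable is baked into the GraphDef as a Const node, so a
  # SavedModel built from the frozen graph has nothing to write into variables/.
  frozen_graph_def = tf.graph_util.convert_variables_to_constants(
      sess,
      sess.graph.as_graph_def(),
      ['detection_boxes', 'detection_scores'])  # placeholder output node names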

@lionel92

Author

commented Jul 20, 2017

@asimshankar Thanks for your reply. I need the variables for serving, so what should I do to avoid this freezing and end up with a non-empty variables directory?

@vitutc


commented Aug 29, 2017

@asimshankar @jch1 @lionel92
I am facing this same issue while following the base tutorials and the provided "export_inference_graph.py" script. If the graph is frozen, is there a way to unfreeze it?

I can't find help anywhere; any advice is greatly appreciated!

@lionel92

Author

commented Aug 29, 2017

@vitutc There is a simple way to solve this problem. Just modify the function "_write_saved_model" in "exporter.py" like this, using the default graph that is already loaded in the global environment instead of generating a frozen one. Don't forget to update the caller's arguments accordingly.

def _write_saved_model(saved_model_path,
                       trained_checkpoint_prefix,
                       inputs,
                       outputs):
  """Writes SavedModel to disk.
  Args:
    saved_model_path: Path to write SavedModel.
    trained_checkpoint_prefix: path to trained_checkpoint_prefix.
    inputs: The input image tensor to use for detection.
    outputs: A tensor dictionary containing the outputs of a DetectionModel.
  """
  saver = tf.train.Saver()
  with session.Session() as sess:
    saver.restore(sess, trained_checkpoint_prefix)
    builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)

    tensor_info_inputs = {
          'inputs': tf.saved_model.utils.build_tensor_info(inputs)}
    tensor_info_outputs = {}
    for k, v in outputs.items():
      tensor_info_outputs[k] = tf.saved_model.utils.build_tensor_info(v)

    detection_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
              inputs=tensor_info_inputs,
              outputs=tensor_info_outputs,
              method_name=signature_constants.PREDICT_METHOD_NAME))

    builder.add_meta_graph_and_variables(
          sess, [tf.saved_model.tag_constants.SERVING],
          signature_def_map={
              signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                  detection_signature,
          },
      )
    builder.save()
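
A quick way to sanity-check the re-exported model afterwards (a minimal sketch; the export path is a placeholder):

import os
import tensorflow as tf

export_dir = '/path/to/saved_model_dir'  # placeholder
with tf.Session(graph=tf.Graph()) as sess:
  meta_graph = tf.saved_model.loader.load(
      sess, [tf.saved_model.tag_constants.SERVING], export_dir)
  print(list(meta_graph.signature_def.keys()))  # should list the detection signature
  print(os.listdir(os.path.join(export_dir, 'variables')))  # should no longer be empty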
@vitutc


commented Aug 29, 2017

@lionel92
THANK YOU SO MUCH! A little detail but so hard for me as a beginner to piece together. I hope this helps other people as well! =D

@dnlglsn


commented Sep 14, 2017

I created a script using the above modification from @lionel92; it essentially replaces export_inference_graph.py and exports a graph that can be served from TensorFlow Serving.
https://gist.github.com/dnlglsn/c42fbe71b448a11cd72041c5fcc08092

@kerolos


commented Dec 18, 2017

Hello @dnlglsn, could you please explain why we need the variables folder when we save the model?
Is it useful when working in C++, or when importing a model into OpenCV C++?

@dnlglsn


commented Jan 17, 2018

@kerolos I wanted to use a non-frozen model in Python and serve it via TensorFlow Serving. I guess the way models are saved expects a "variables" folder where the un-frozen variables are saved.

@alensaqe


commented Feb 15, 2018

@dnlglsn you should make a pull request to add https://gist.github.com/dnlglsn/c42fbe71b448a11cd72041c5fcc08092 as a separate file that serves unfrozen variables rather than a frozen graph, so those of us who want to serve via Google Cloud ML Engine can use that exporter instead!

@mona-abc


commented Apr 9, 2018

Hello @dnlglsn, I used your export_inference_graph.py in place of the original one, but I got the error "TypeError: predict() missing 1 required positional argument: 'true_image_shapes'", caused by line 109 of the script: output_tensors = detection_model.predict(preprocessed_inputs).
My Python version is 3.6. As a beginner I don't know how to solve this problem; would you please tell me how to fix it? Thank you.

@shobhitnpm


commented Apr 23, 2018

Hi, here is the write_saved_model function in exporter.py:

def write_saved_model(saved_model_path,
                      frozen_graph_def,
                      inputs,
                      outputs):
  """Writes SavedModel to disk.

  If checkpoint_path is not None bakes the weights into the graph thereby
  eliminating the need of checkpoint files during inference. If the model
  was trained with moving averages, setting use_moving_averages to true
  restores the moving averages, otherwise the original set of variables
  is restored.

  Args:
    saved_model_path: Path to write SavedModel.
    frozen_graph_def: tf.GraphDef holding frozen graph.
    inputs: The input image tensor to use for detection.
    outputs: A tensor dictionary containing the outputs of a DetectionModel.
  """
  with tf.Graph().as_default():
    with session.Session() as sess:

      tf.import_graph_def(frozen_graph_def, name='')

      builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)

      tensor_info_inputs = {
          'inputs': tf.saved_model.utils.build_tensor_info(inputs)}
      tensor_info_outputs = {}
      for k, v in outputs.items():
        tensor_info_outputs[k] = tf.saved_model.utils.build_tensor_info(v)

      detection_signature = (
          tf.saved_model.signature_def_utils.build_signature_def(
              inputs=tensor_info_inputs,
              outputs=tensor_info_outputs,
              method_name=signature_constants.PREDICT_METHOD_NAME))

      builder.add_meta_graph_and_variables(
          sess, [tf.saved_model.tag_constants.SERVING],
          signature_def_map={
              signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                  detection_signature,
          },
      )
      builder.save()

When I execute

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix training/model.ckpt-59300 --output_directory DetectionModel

it creates a model named DetectionModel, but that directory contains an empty variables folder. Please suggest an appropriate solution.

@dnlglsn


commented Apr 23, 2018

@mona-abc I left the job I was doing this work for and we only had Python 2.7. I am sorry, but I can't help you with your Python 3.6 issue. My guess is the underlying API changed and now the script won't work without modification. Look through the notes at the top of the source code on this page. It should shed some light on the situation. https://github.com/tensorflow/models/blob/master/research/object_detection/core/model.py It looks like the preprocess function might be returning a tuple now, but I'm not sure.
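
A hedged sketch of the adjustment that hints at, assuming the newer Object Detection API signatures (variable names follow the gist and may differ in your copy):

# preprocess() now returns both the preprocessed images and their true shapes,
# and predict()/postprocess() expect the shapes as a second argument.
preprocessed_inputs, true_image_shapes = detection_model.preprocess(inputs)
output_tensors = detection_model.predict(preprocessed_inputs, true_image_shapes)
postprocessed_tensors = detection_model.postprocess(output_tensors, true_image_shapes)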

@mona-abc


commented Apr 24, 2018

@dnlglsn Thank you for your response. I changed my Python version to 2.7 and exported the model successfully. But now I have run into another problem when calling the model with client.py. I am mainly following "Deploying Object Detection Model with TensorFlow Serving — Part 3" (https://medium.com/@KailaGaurav/deploying-object-detection-model-with-tensorflow-serving-part-3-6a3d59c1e7c0), which saves the model the same way you do, but I get errors when running its client.py script and can't figure them out.
Would you please share the client.py you used with TensorFlow Serving for object detection? That would be very helpful to us TF learners. Thank you.

@minfenghong


commented Jun 11, 2019

Hi,
Just sharing my case for others:
I needed to pass the "saver" parameter to fix the "empty variables folder" issue.

builder = tf.saved_model.builder.SavedModelBuilder('models/siam-fc/1')

builder.add_meta_graph_and_variables(
      model.sess,
      [tf.saved_model.tag_constants.SERVING],
      signature_def_map={
          'tracker_init': model_signature,
          'tracker_predict': model_signature2
      },
      saver=model.saver  # passing the saver explicitly is what fixes the empty variables folder
)
builder.save()
@hackeritchy


commented Jun 12, 2019

@minfenghong hi, where did you get the model and model.saver?

@minfenghong


commented Jun 12, 2019

> @minfenghong hi, where did you get the model and model.saver?

Hi,
The saver is the one created when restoring the model from its checkpoint files, e.g.
https://github.com/bilylee/SiamFC-TensorFlow/blob/master/inference/inference_wrapper.py
saver = tf.train.Saver(variables_to_restore_filterd)
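
Putting the two pieces together, a minimal sketch (the graph construction, checkpoint prefix, export path, and signature map are placeholders):

import tensorflow as tf

# Build or import your inference graph first, then:
saver = tf.train.Saver()
with tf.Session() as sess:
  saver.restore(sess, '/path/to/model.ckpt')  # placeholder checkpoint prefix
  builder = tf.saved_model.builder.SavedModelBuilder('/path/to/export_dir')  # placeholder
  builder.add_meta_graph_and_variables(
      sess, [tf.saved_model.tag_constants.SERVING],
      signature_def_map={},  # add your signatures here
      saver=saver)  # pass the same saver that restored the checkpoint
  builder.save()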

@wronk


commented Jul 5, 2019

I was also struggling with this and was able to export a model (including the variables files) by updating @lionel92's suggestion for the current OD API code version (as of July 2). It mostly involves changing the write_saved_model function in models/research/object_detection/exporter.py.

While this works, it's definitely a hack. @asimshankar, I think we should reopen this issue (or maybe #2045) -- I think it'd be valuable for the community to have a "proper" way to export models for inference w/ TF Serving.

Update write_saved_model in exporter.py

def write_saved_model(saved_model_path,
                      trained_checkpoint_prefix,
                      inputs,
                      outputs):

  saver = tf.train.Saver()
  with tf.Session() as sess:
    saver.restore(sess, trained_checkpoint_prefix)

    builder = tf.saved_model.builder.SavedModelBuilder(saved_model_path)

    tensor_info_inputs = {
        'inputs': tf.saved_model.utils.build_tensor_info(inputs)}
    tensor_info_outputs = {}
    for k, v in outputs.items():
      tensor_info_outputs[k] = tf.saved_model.utils.build_tensor_info(v)

    detection_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs=tensor_info_inputs,
            outputs=tensor_info_outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
        ))

    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants
            .DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                detection_signature,
        },
    )
    builder.save()

Update _export_inference_graph in exporter.py

Then, within the _export_inference_graph function, update the final line to pass the checkpoint prefix like:

  write_saved_model(saved_model_path, trained_checkpoint_prefix,
                    placeholder_tensor, outputs)

Call export script

Call models/research/object_detection/export_inference_graph.py normally. For me, that looked similar to this:

INPUT_TYPE=encoded_image_string_tensor
PIPELINE_CONFIG_PATH=/path/to/model.config
TRAINED_CKPT_PREFIX=/path/to/model.ckpt-50000
EXPORT_DIR=/path/to/export/dir/001/

python $BUILDS_DIR/models/research/object_detection/export_inference_graph.py \
    --input_type=${INPUT_TYPE} \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
    --output_directory=${EXPORT_DIR}

If it works, you should see a directory structure like this. This is ready to be dropped into a TF Serving Docker image for scaled inference.

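Roughly, the SavedModel portion of the export should look like this (a sketch; the variables shard count will vary):

001/
  saved_model.pb
  variables/
    variables.data-00000-of-00001
    variables.index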
