How to add new model to the serving? #452

Closed
tbchj opened this issue May 25, 2017 · 10 comments
tbchj commented May 25, 2017

I have a request: I want to add other models to the serving setup, such as inception_v4, resnet, or any other network. How can I do this? Do I need to change or add code in the serving source?
I need some help.

kirilg (Contributor) commented May 25, 2017

You can add multiple models by using a ModelServerConfig at startup time. See this tutorial for some documentation.

tl;dr: when you run the model_server, run using something like this:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_config_file=<path_to_your_config_on_disk>

Where the config should be a file on disk that looks like this:

model_config_list: {
  config: {
    name: "Model1",
    base_path: "/path/to/model1",
    model_platform: "tensorflow"
  },
  config: {
    name: "Model2",
    base_path: "/path/to/model1",
    model_platform: "tensorflow"
  },
}

Change the name and base_path fields appropriately; you can of course also include more than two models there.
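
Once the server is up with that config, the client picks a model by name in its request. Here's a minimal Python sketch, assuming a server on localhost:9000 and a model exported with a "predict_images" signature that takes an "images" input; the model name, signature name, input key, and image path are all placeholders for your own:

from grpc.beta import implementations
import tensorflow as tf

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

# Connect to the model server.
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'Model1'  # must match a name in model_config_list
request.model_spec.signature_name = 'predict_images'  # placeholder signature name

# Placeholder input; the key, dtype, and shape depend on your model's signature.
image_data = open('/path/to/image.jpg', 'rb').read()
request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(image_data, shape=[1]))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result)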

tbchj (Author) commented May 26, 2017

What you said only covers how to run the server, but I want to know how to build the new model.
For example: how do I add a model so that it is both built with Bazel and runnable?
Is there an example to learn from? Thank you.

sukritiramesh (Contributor) commented

Hi @tbchj, can you share a bit more about your use-case and/or setup? Are you looking for examples on how to export new models? TensorFlow Serving uses the SavedModel format, the documentation for which is at: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/saved_model
For API details on how to export SavedModels, the documentation is at: https://www.tensorflow.org/api_docs/python/tf/saved_model
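
To make that concrete, here is a minimal export sketch using a toy graph; a real model such as inception_v4 would build its own inference graph and restore trained variables before exporting, and the paths and names here are placeholders:

import tensorflow as tf

export_dir = '/tmp/my_model/1'  # the numbered version subdirectory the server expects

# Toy inference graph standing in for your real model.
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, w) + b, name='y')

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
  # Declare what the serving system should treat as inputs and outputs.
  signature = tf.saved_model.signature_def_utils.predict_signature_def(
      inputs={'images': x}, outputs={'scores': y})
  builder.add_meta_graph_and_variables(
      sess,
      [tf.saved_model.tag_constants.SERVING],
      signature_def_map={'predict_images': signature})
  builder.save()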

tbchj (Author) commented May 27, 2017

I'm sorry, you guys didn't understand me, probably because of my poor English.
My question is:
I have already downloaded the serving code and done what the official guide says: https://tensorflow.github.io/serving/serving_advanced

Now I can run the inception and mnist servers that are already in the serving package, and the inception and mnist example clients both run fine.

But now I want to add models like inception_v4, resnet, or others besides the mnist and inception examples already in the serving package. I think tf-serving can provide servers for many models, not just the mnist and inception ones already in serving, right?

I guess that to add new models to serving, I would need to modify or add some proto files in the serving tree, and also set up the build for the new model so it can run.
So, do you understand my question? How do I add a new model, like the mnist one already in serving, so that it can provide a model server the same way? How do I add my own model to serving, and what do I need to do, so that my model can provide a server like the mnist server?

kirilg (Contributor) commented May 30, 2017

There are two parts to serving a new model:

  1. Exporting a model: for that, you can look at the links Sukriti provided, specifically the SavedModel readme and API doc.
  2. Loading it in a ModelServer: the model server can load any SavedModel from disk, so you can either change the model_base_path flag when running the model server to point it at your new export, or use a config file like the one in my first response.

I get the impression the issue you're describing is with exporting. For that please take a look at the SavedModel links above. One helpful approach is to look at the inception example we provide in our repo already (https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/inception_saved_model.py) and see how the tensorflow.python.saved_model libraries are being used there, and replicate the same exporting logic in your other models like inception_v4 or resnet.
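
For example, for the single-model case you could point the stock server directly at a new export, along the lines of the command above (the model name and path here are placeholders):

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception_v4 --model_base_path=/path/to/inception_v4_export

Note that model_base_path should contain numbered version subdirectories (e.g. .../1), and the server will load and serve the latest version it finds there.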

com9009 commented May 31, 2017

Can I get more information about signature_def?
I'd also like to know how the API works.

sukritiramesh (Contributor) commented

Hi @com9009, you can find SignatureDef documentation here:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/signature_defs.md
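
As a quick illustration of what that page describes, here is a sketch of building a classification SignatureDef; the three placeholders stand in for the real tensors in your graph:

import tensorflow as tf

# Placeholders standing in for your graph's input and output tensors.
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
classes = tf.placeholder(tf.string, name='classes')
scores = tf.placeholder(tf.float32, name='scores')

classification_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={
            tf.saved_model.signature_constants.CLASSIFY_INPUTS:
                tf.saved_model.utils.build_tensor_info(serialized_tf_example)
        },
        outputs={
            tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES:
                tf.saved_model.utils.build_tensor_info(classes),
            tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
                tf.saved_model.utils.build_tensor_info(scores),
        },
        method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))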

com9009 commented May 31, 2017

Thank you @sukritiramesh

tbchj (Author) commented Jun 1, 2017

ok, I'll try. thank you @kirilg @sukritiramesh

gyang274 commented

@tbchj Not sure how far you've gotten. I recently wrote down how I added inception-v4 and inception-resnet-v2 to tensorflow_serving: https://gyang274.github.io/docker-tensorflow-serving-slim/0x02.slim.html. It might help.
