Tensorflow serving Keras model #310
Have you tried deleting:
Yes, I have tried that and I'm getting a similar error. The code is still failing when calling model_exporter.export(...).
I've also tried to adapt the mnist_saved_model.py, but I'm also getting errors. Code:
Error:
I don't think so; I'm guessing you have an issue with the sessions. Here is an example I adapted to export a Keras model, maybe it can help you.
@viksit section 4 of the tutorial is broken, btw
I managed to export a Keras model for Tensorflow Serving (not sure whether it is the official way to do this). My first trial prior to creating my custom model was to use a trained model available on Keras such as VGG19. Here is how I did it (I put it in separate boxes to help understanding and because I use Jupyter :)):

Creating the model

```python
import keras.backend as K
from keras.applications import VGG19
from keras.models import Model

# very important to do this as a first thing
K.set_learning_phase(0)

model = VGG19(include_top=True, weights='imagenet')

# The creation of a new model might be optional depending on the goal
config = model.get_config()
weights = model.get_weights()
new_model = Model.from_config(config)
new_model.set_weights(weights)
```

Exporting the model

```python
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter

export_path = 'folder_to_export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'images': new_model.input},
                                  outputs={'scores': new_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
```

Some side notes:
In case you're curious about the client side, it should be similar to the one below. I added some extra things to use Keras methods for decoding predictions, but it could also be done on the serving side:

```python
request = predict_pb2.PredictRequest()
request.model_spec.name = 'vgg19'
request.model_spec.signature_name = 'predict'
request.inputs['images'].CopyFrom(tf.contrib.util.make_tensor_proto(img))
result = stub.Predict(request, 10.0)  # 10 secs timeout

to_decode = np.expand_dims(result.outputs['outputs'].float_val, axis=0)
decoded = decode_predictions(to_decode, 5)
print(decoded)
```

Hopefully it will help someone :)
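For reference, the snippet above assumes the usual TF Serving gRPC client setup around it; a minimal sketch of those missing pieces, assuming the era's beta gRPC API and a server on localhost:9000 (host, port, and the shape of `img` are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from keras.applications.vgg19 import decode_predictions
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# Connect to the model server and create the prediction stub
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# `img` would be a preprocessed image array, e.g. of shape (1, 224, 224, 3)
```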
@tspthomas I have tried to use your guide, but I'm getting this client-side error: `grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="input tensor alias not found in signature: images")` Do you have any idea how I can solve this issue?
Hi @azagovora! Well, it seems that I made a mistake in the code. Could you try to change "images" to "inputs" in the client code? That is, change `request.inputs['images'].CopyFrom(...)`
to
`request.inputs['inputs'].CopyFrom(...)`
I think the only problem is that you need to make the input signature match. Let me know if that solves your problem.
Hi @tspthomas! Thank you for your quick reply to my question. I have corrected my code but I'm getting the same error message: `grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="input tensor alias not found in signature: input")`
Hi @azagovora! Sorry for the dumb question, but did you put 'inputs' or 'input' in the client code? It looks like you put 'input', per the error message, so you'd need to change it accordingly. If it is correct, could you please paste your export code and your client code?
Sorry, it was my mistake, I put 'input' instead of 'inputs'. Now I'm getting this error: `grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="Output 0 of type string does not match declared output type float for node _recv_input_1_1_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=92768290196530094, tensor_name="input_1_1:0", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]")` It looks like an export problem. Here is my export code:

```python
import os
from keras.applications.inception_v3 import InceptionV3
from tensorflow.python.saved_model import builder as saved_model_builder
#from inception_v3_finetuning_v2 import load_trained
from os.path import join as join_path

tf.app.flags.DEFINE_string('output_dir', '/tmp/inception_output', ...

def export():
    ...

def main(unused_argv=None):
    ...

if __name__ == '__main__':
    ...
```
Hello @azagovora. It seems to me that it is something in your client code, not in the export. I think you should review the way you're reading the image. In the code I posted here, I'm reading the image with Keras methods and passing it as a float array to the input. It might be the case that you're reading it as a binary string, and this is why you're facing the error. If you're following the Inception v3 sample code, you need to change the way you read the image; you can use Keras default methods for that. In my case, I created a method to read and pre-process the image:
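The original method wasn't preserved here; a minimal sketch of such a helper, assuming InceptionV3's 299x299 input size and Keras's bundled preprocess_input (names and sizes are assumptions):

```python
import numpy as np
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input

def load_and_preprocess(path, target_size=(299, 299)):
    # Read the image with Keras utilities and convert it to a float array
    img = image.load_img(path, target_size=target_size)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)  # add a batch dimension
    return preprocess_input(x)     # scale pixels the way InceptionV3 expects
```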
Please review the target size and other information and change it according to the model you're using. In case you still can't find the issue, please paste your complete client code.
Hi @tspthomas! It works now. Thank you very much for your help!
@tspthomas awesome! Yours was the only help I found for converting the Keras model to TensorFlow using the saved_model_builder. Thanks!
@azagovora @ashavish You're welcome. I'm glad that it helped :)
@GeorgianaPetria how do you add a new model (like the Keras model you added) to Serving?
@tspthomas Thank you so much, good sir! This is the only good explanation of how to do this with Keras models. Thank you again! Cheers,
Hi @tspthomas, where do you put your preprocessing method? (Is it in the client, or somewhere else?) Thank you! Dylan
@dylanrandle what kind of pre-processing are you doing? TF ops, or normal Python?
Hello @dylanrandle! Well, in this simple example I put it on the client side (I used Keras default methods for this). Some of the pre-trained models in Keras have the preprocess function in the same file (e.g. InceptionV3). IMHO, this part could be handled by the servable, because the client shouldn't need to be aware of specific preprocessing. I noticed that there are some efforts to add this kind of code within the graph, where one of the nodes prior to the first layers of the network does the preprocessing. You can also take a look at the default code for Inception V3 in the examples folder. Not sure if this helps :) Regards,
Hi @viksit, it is normal Python preprocessing, e.g. converting characters to vectors. Thank you. Hi @tspthomas, yes, thank you. That is very helpful. So it sounds like there are essentially 3 options: Python in the client, in the same file, or in the graph itself? Thank you! PS @viksit @tspthomas what would you recommend for best performance? Thank you. Best,
Hi @dylanrandle! Sorry for the delay. I thought I had answered this question... I didn't understand what you mean by "the same file" (I mean, which file you're mentioning), but the other two are true. I'm not the best person to talk about performance (mainly without measuring anything), but I think placing it in the graph should be a good idea. It would be interesting to measure the difference, but I don't think it has too much impact on performance. Of course, it depends on the kind of preprocessing you're doing. I'm assuming something like subtracting the channel means, which is very optimized in Numpy-like libraries. If you think about other types of preprocessing (e.g. for NLP, where you can have mappings to dictionaries, etc.), the results could be different. Since it should be easy to evaluate, my advice would be to test and check which one is suitable for your scenario. And if you get any results, please share with us :)
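As an illustration of the in-graph option discussed above, a sketch assuming channel-mean subtraction of the VGG kind (placeholder name, shape, and mean values are assumptions, not from the thread):

```python
import tensorflow as tf

# Raw input placeholder; the preprocessing becomes part of the served graph
raw_images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3), name='raw_images')
channel_means = tf.constant([103.939, 116.779, 123.68], dtype=tf.float32)
preprocessed = raw_images - channel_means  # broadcast per-channel subtraction
# `preprocessed` would then be wired into the model's first layer
```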
It is important to note that when you are exporting a model, if you use
Hello Friends, 2 questions:

1. @ipoletaev I have tried using both K.set_learning_phase(False) and K.set_learning_phase(0), and both times when I load my model I get model.uses_learning_phase = True:

```python
from tensorflow.contrib.keras.python.keras import backend as K
from tensorflow.contrib.keras.python.keras.models import load_model

K.set_learning_phase(0)  # "test" mode
MODEL_PATH = input('Input the model path:')
model = load_model(MODEL_PATH)
print('Loaded model successfully.')
if model.uses_learning_phase:
    raise ValueError('Model using learning phase.')
```

2. @tspthomas After I run result = stub.Predict(request, 10.0) I get a PredictResponse object back, but I don't know how to get out the float_vals:

```
outputs {
  key: "outputs"
  value {
    dtype: DT_FLOAT
    tensor_shape { dim { size: 1 } dim { size: 20 } }
    float_val: 0.000343723397236
    float_val: 0.999655127525
    float_val: 3.96821117632e-11
    float_val: 1.20521548297e-09
    float_val: 2.09611101809e-08
    float_val: 1.46216549979e-09
    float_val: 3.87274603497e-08
    float_val: 1.83520256769e-08
    float_val: 1.47733780764e-08
    float_val: 8.00914179422e-08
    float_val: 2.29388191997e-07
    float_val: 6.27798826258e-08
    float_val: 1.08802950649e-07
    float_val: 4.39628813353e-08
    float_val: 7.87182985462e-10
    float_val: 1.31638898893e-07
    float_val: 1.42612295306e-08
    float_val: 3.0768305237e-07
    float_val: 1.12661648899e-08
    float_val: 1.68554503688e-08
  }
}
```

I can do something like result.outputs, but that just returns a protobuf MessageMap, and I still can't get out the float vals. Any help greatly appreciated, guys. Thank you! Cheers,
Dylan

1) how are you importing K?
2) this is a protobuf format. are you parsing using the correct libs?
Hey Viksit, thank you!
@dylanrandle See keras-team/keras#2310 - sometimes there can be Python import issues. Try importing K from the layers core and retrying to see if that works. If it does, there may be something wrong with the way the imports are being processed (in order). See how to use gRPC via the examples/docs. Something like
@dylanrandle, you could also try to do something simple similar to this: import the Keras function to decode predictions (if you want to). Since the outputs are like a dictionary, you can access it simply by
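A sketch of what that access looks like, reconstructed from the confirmation a few comments below (the decode_predictions import from VGG19 is an assumption):

```python
import numpy as np
from keras.applications.vgg19 import decode_predictions

# The response outputs behave like a dict keyed by the signature's output names
scores = np.expand_dims(result.outputs['outputs'].float_val, axis=0)
print(decode_predictions(scores, top=5))
```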
Please note that the names of the outputs and keys may change depending on how you structured things.
@tspthomas Thank you so much! result.outputs['outputs'].float_val works! @viksit These import shenanigans only started after I upgraded to TensorFlow 1.2, btw. I tried importing from layers core and it did not fix it.
I've tried to reproduce the steps described here to export a trained ResNet50 (from scratch) model from keras.applications, but TensorFlow Serving outputs random predictions and is very slow (4s/7s). I managed to export a SqueezeNet 1.1 model in the same way, but TensorFlow Serving keeps returning wrong values (p.s. it returns the correct shape, of course) :/
@mauri870 I had the same issue with TensorFlow Serving outputting random predictions until I fixed an error in my image preprocessing functions.
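A common culprit behind the random-predictions symptom (an assumption about the fix, not confirmed in the thread): each keras.applications model expects its own preprocess_input, e.g. for ResNet50, which does BGR conversion and ImageNet mean subtraction; feeding raw [0, 255] pixels produces garbage scores:

```python
import numpy as np
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input

# 'cat.jpg' is a placeholder path; ResNet50 expects 224x224 inputs
img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224)))
batch = np.expand_dims(img, axis=0)
batch = preprocess_input(batch)  # BGR conversion + ImageNet mean subtraction
```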
Hi everyone, I have exported my model this way and it works, so thanks a lot for the helpful posts here! One problem I am having now: what if I want to do some data preprocessing in the export.py itself? Thanks!
@wengchen1993 Why would you preprocess data in your export? I think you should be preprocessing either in your client.py or in the graph itself (the latter will require you to re-export your model).
@dylanrandle Hi, yeah, I have been doing that in my client.py, but I'm wondering whether it's possible to merge that into export.py, so that a user only has to send raw data and export.py can do a little preprocessing before passing the preprocessed data into the model. Then again, I suppose I can just have users send data through client.py first (if I intend to keep the preprocessing steps and the model as a black box).
@tspthomas What is predict_pb2? I am getting the below error.
Also, can this protobuf file be used for Android?
Hello all, but mainly @ipoletaev, regarding K.set_learning_phase(False) vs. K.set_learning_phase(0): I have used K.set_learning_phase(False) with TensorFlow 1.1 and indeed my accuracy numbers seem correct for the test phase, so I think it works. But I am confused how this can work, since the backend documentation (even in the TensorFlow 1.1 project) shows that 0 or 1 are the only legitimate values for the learning phase. Thanks.
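The likely reason this works: in Python, bool is a subclass of int, so False compares equal to 0 and passes any check that expects 0:

```python
# bool is a subclass of int, so False and 0 are interchangeable here
assert False == 0 and isinstance(False, int)
# Hence K.set_learning_phase(False) behaves like K.set_learning_phase(0)
```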
@tspthomas Hi Thomas, I followed your process. I am getting this error with the current code listed below:

Error:

It's weird, because my inputs and outputs have the following formats and clearly have dtypes, unless there's a problem with having 2 types of inputs. INPUTS:

OUTPUTS:

Hope you, or anyone, can help me. Thanks! ***UPDATE: RESOLVED. I just figured it out. Change this line:
to:
I am now getting this error with the code above; hoping someone can help me with this. `Exporting trained model to serving/3`
Hi @franciscogmm, having no assets to save/write is not an error (it just means assets weren't supplied in the calls to add_meta_graph_and_variables or add_meta_graph). The first call to the builder requires a session with the variables, etc. Are you calling the builder within the scope of the session in the updated code as well?
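For reference, the session-scoped pattern being described is what the VGG19 example earlier in the thread does; a minimal sketch, reusing the builder and signature names from that example:

```python
# The first call to the builder must happen inside the session that owns
# the model's variables, so they can be captured into the SavedModel
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        signature_def_map={'predict': signature})
    builder.save()
```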
Hi @sukritiramesh, yes. The code is currently like this:

It's giving me the error above. However, when I put this line at the start of the entire code:

I got this error:
It finally worked. I think there must've been a problem with the model I was calling in the signature. I followed another example, which used the more traditional way of loading a model in Keras (using H5 and JSON), and it worked. I think the problem before was that the model I was calling wasn't compiled. Here is the link: https://github.com/krystianity/keras-serving
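For illustration, that "traditional" load-and-compile step looks roughly like this (a sketch; the file names and compile arguments are assumptions, not taken from the linked repo):

```python
from keras.models import model_from_json

# Rebuild the architecture from JSON, then restore the trained weights
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('model.h5')

# Compiling gives the model a defined inference graph before export
model.compile(optimizer='adam', loss='categorical_crossentropy')
```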
Hi everyone, I followed the steps here to 1. export a model as a .pb file in SavedModel format for TensorFlow Serving and 2. build the gRPC client. I have the very weird behavior that the model always predicts exactly the same class (no matter what image I take as input). I'm unsure whether my error is on the client side or the export is somehow wrong. Did anybody have the same issue? EDIT: I guess the images didn't get preprocessed using imagenet_utils during training, and thus it shouldn't be used during inference, but I'm not completely sure at this time. Here is the code I use for the export:

And the client:
Hi @tspthomas, error message:
Hello @cchung100m! It's been a while since I posted this suggestion, so things might have changed :) Anyway, looking at this error, it seems that the 'ClipByValue' operation (which might be used by your model) is not available in the TensorFlow version you're using to run TensorFlow Serving. I took a quick look at TF's code and it seems this operator was added in this commit (and the operation appears to be available only starting with TF 1.8): tensorflow/tensorflow@083cf6b I'd recommend changing the TF version you used to build TF Serving to a more recent one and testing. If that's not possible, I'd try to downgrade the Keras version you're using and re-export the model. Another possibility is to downgrade your versions. I believe the problem could be either that the model implements something not available in your current TF version, or some bug in the current TF Serving version you're using. It also seems that more people are facing a similar issue here (no answer yet, but it can help to confirm the versions): tensorflow/tensorflow#19822 Hope that helps! Regards,
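One way to check which ops an export actually uses, so you can compare against the serving binary's TF version (a sketch; the 'folder_to_export' path follows the earlier example and is an assumption):

```python
import tensorflow as tf

# Load the SavedModel and list the op types its graph contains; an op
# missing from the serving binary's TF version (e.g. ClipByValue before
# TF 1.8) would explain a load failure
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'folder_to_export')
    op_types = {node.op for node in sess.graph.as_graph_def().node}
    print(sorted(op_types))
```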
Hello @cchung100m, I never got this error, though. Hope this helps.
Hi @tspthomas @R-Miner, thank you for the prompt reply.
How do I use a designated GPU to run TensorFlow Serving with Docker? I don't want to take up all GPUs when running TF Serving. Does anybody know? In addition, my model is written with tf.keras.
I am trying to convert my Keras graph to a TF graph.
I managed to run the provided tensorflow_serving examples, but I'm having issues running my custom model.
Here is my code:

```python
import tensorflow as tf
from keras import backend as K
from tensorflow.contrib.session_bundle import exporter

def export_model_to_tf(model):
    K.set_learning_phase(0)  # all new operations will be in test mode from now on
    # serialize the model and get its weights, for quick re-building
```

This is the error I am getting:

Do you know what could cause the Saver to fail?
Thanks!