{ "error": "Serving signature name: \"serving_default\" not found in signature def" } #275
@kspook How do you test the saved model? :)
```sh
tensorflow_model_server --port=9000 --rest_api_port=9001 \
    --model_name=crnn \
    --model_base_path=/home/kspook/CRNN_Tensorflow/model/crnn_syn90k_saved_model

curl -X POST http://localhost:9001/v1/models/crnn:predict \
    -H 'cache-control: no-cache' \
    -H 'content-type: application/json' \
    -d '{
```
@kspook I think you're supposed to modify the code in tools/export_saved_model.py to fix this :)
I found it with saved_model_cli. ^^ Thanks, anyway ^^
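(For reference, the signatures baked into an exported SavedModel can be listed with saved_model_cli; the exact path below, including the version subdirectory, is an assumption:)

```sh
# Lists SignatureDefs (e.g. serving_default) with their input/output tensors
saved_model_cli show --dir /home/kspook/CRNN_Tensorflow/model/crnn_syn90k_saved_model/1 --all
```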
@MaybeShewill-CV, so you never tried to run TensorFlow Serving for this model? I want to try, but the placeholder must be of string type, while the model's input_tensor is float32. I also asked about this on the tensorflow/serving tracker (tensorflow/serving#994 (comment)). With Flask, the float32 input_tensor works fine. Do you know anything about this?
@kspook The saved model was not used in tensorflow serving in my local environment==!
I am trying to set up TensorFlow Serving, but I get an error because the saved_model has no name for its output, and I can't figure out how to change the code. T.T Can you help me?
@kspook I have used a TensorFlow saved model for TensorFlow Serving here: https://github.com/MaybeShewill-CV/nsfw-classification-tensorflow. You may check whether you can find something useful there. I will look into this problem too once I have some spare time :)
Thank you. The nsfw-classification-tensorflow model has a name for its output['prediction'] tensor_info. In CRNN, though, the output comes from decodes, _ = tf.nn.ctc_beam_search_decoder(), which returns a SparseTensor. Some posts suggested building the TensorProto for the SparseTensor's components separately, but I haven't found confirmation that this is the correct approach.
I was able to get this model to partially work with tf-serving by decomposing the decodes[0] SparseTensor into its indices, values, and dense_shape. This snippet explicitly names the outputs for the individual parts of the decodes[0] tensor (see tensorflow/tensorflow#22396). I modified export_saved_model.py as follows:

```python
# from export_saved_model.py#85
# (assumes `sm = tf.saved_model` and that `decodes` is the first return
# value of tf.nn.ctc_beam_search_decoder, i.e. a list of SparseTensors)
indices_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].indices)
values_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].values)
dense_shape_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].dense_shape)

# [...]

# from export_saved_model.py#111
# build SignatureDef protobuf
signature_def = sm.signature_def_utils.build_signature_def(
    inputs={'input_tensor': saved_input_tensor},
    outputs={
        'decodes_indices': indices_output_tensor_info,
        'decodes_values': values_output_tensor_info,
        'decodes_dense_shape': dense_shape_output_tensor_info,
    },
    method_name=sm.signature_constants.PREDICT_METHOD_NAME,
)

# add graph into MetaGraphDef protobuf
saved_builder.add_meta_graph_and_variables(
    sess,
    tags=[sm.tag_constants.SERVING],
    # the line below registers the signature under 'serving_default'
    signature_def_map={sm.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def},
)
```

This then works on some of my example images, but seems to segfault on others without a stack trace. I think the problem has to do with the dynamic sizing of the bidirectional dynamic RNN (could there be a fixed batch size somewhere?). Happy to turn this into its own ticket for discussion. Cheers
Btw, since someone might run into the same segfault issue: I solved it. It turns out that using tf-serving out of the box with this model segfaults on images wider than 100 pixels. Changing __C.ARCH.INPUT_SIZE = (100, 32) to __C.ARCH.INPUT_SIZE = (None, 32) # synth90k dataset fixed it.
@eldon, how did you change it?
@kspook I ended up adding my own function for this:

```python
import numpy as np

def unpack_sparse_tensor_to_str(self, indices, values, dense_shape):
    # Map raw CTC label indices to character ordinals via the ord map
    values = np.array([self._ord_map[str(tmp) + '_index'] for tmp in values])
    number_lists = np.ones(dense_shape, dtype=values.dtype)
    str_lists = []
    res = []
    # Scatter the sparse values back into a dense [batch, time] array
    for i, index in enumerate(indices):
        number_lists[index[0], index[1]] = values[i]
    for number_list in number_lists:
        # Translate from ord() values into characters
        str_lists.append([self.int_to_char(val) for val in number_list])
    for str_list in str_lists:
        # int_to_char() returns '\x00' for an input == 1, which is the default
        # value in number_lists, so we skip it when building the result
        res.append(''.join(c for c in str_list if c != '\x00'))
    return res
```
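(For illustration, a hedged sketch of feeding a tf-serving response from the signature above into this function; `resp` and `codec` are assumed names for the parsed JSON response and the object carrying `_ord_map` / `int_to_char()`:)

```python
import numpy as np

# resp: parsed JSON from POST .../v1/models/crnn:predict (columnar "inputs" format)
outputs = resp['outputs']
texts = codec.unpack_sparse_tensor_to_str(
    indices=np.array(outputs['decodes_indices'], dtype=np.int64),
    values=np.array(outputs['decodes_values'], dtype=np.int64),
    dense_shape=np.array(outputs['decodes_dense_shape'], dtype=np.int64),
)
print(texts)  # e.g. ['hello']
```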
@eldon, with your solution, 'test_load_saved_model' raises an error, so we end up needing two different __C.ARCH.INPUT_SIZE settings (the resize call with interpolation=cv2.INTER_LINEAR).
@eldon, do you know how to feed a string to the placeholder? I got it working with float input, but I can't with string.
@eldon You may open a pull request if you're willing to :)
Ahh @kspook, I also got an error at test_load_saved_model, but you can ignore it (tf-serving loads the model differently and expects different inputs/outputs). It works fine in practice. Another thing to consider: we noticed different results when leaving ...

@kspook I haven't tried loading base64-encoded images directly. I think that would have to become part of the exported model for it to work in the request. For now, I use cv2 to read the image and send array.tolist() to tf-serving. I'd like to get string input working, but that will likely be further in the future...

@MaybeShewill-CV sure! I'm a bit busy this week and next but would be glad to. I'll see what I can come up with :)
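(A hedged sketch of that request flow, using the server and port from the command earlier in the thread; the image path, resize, and normalization are assumptions:)

```python
import cv2
import requests

# Read and resize the image to the model's expected 100x32 input
# (width 100 matches the default __C.ARCH.INPUT_SIZE = (100, 32))
image = cv2.imread('test_01.jpg', cv2.IMREAD_COLOR)
image = cv2.resize(image, (100, 32), interpolation=cv2.INTER_LINEAR)
image = image.astype('float32')  # any normalization is an assumption

# Columnar request format; 'input_tensor' is the input name from the signature
resp = requests.post(
    'http://localhost:9001/v1/models/crnn:predict',
    json={'inputs': {'input_tensor': [image.tolist()]}},
)
print(resp.json())
```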
@eldon, thank you for your response. In the case of base64, __C.ARCH.INPUT_SIZE = (None, 32) produces a compile error, so it seems it should not be 'None' (?). In addition, can decodes_values contain characters? Currently it shows just numbers.
@MaybeShewill-CV, can you make the exported model output characters instead of indices?
@kspook don't forget to pass the output of the call to unpack_sparse_tensor_to_str() above; that's what maps the indices back to characters.
@eldon, I get an error (although it works well at test_load_saved_model).
@MaybeShewill-CV, thank you very much. I still have two issues. One is major, the other minor.
@kspook 1. I have not tested deploying this model via the HTTP/REST API.
@MaybeShewill-CV, thank you. The export produced a saved_model, but I still had an error with the test script: python tools/test_crnn.py
@kspook Are you sure you have the same config as I pushed here? :)
@MaybeShewill-CV, I use Python 3.7. I had a similar situation to eldon's. Did you use the same checkpoint as in the README.md? If I change 25 --> 29, I couldn't even export.
@kspook The Python client will give you character predictions. Have you updated the repo and tested it==!
@MaybeShewill-CV Looks like you used the snippets I posted above and committed them to your repo, so I no longer need to do a PR?
@MaybeShewill-CV actually, since a lot of the new code comes from my repo, would you mind crediting authorship? Thanks,
@eldon You may open your pull request and I will remove the duplicated code from my side :)
@eldon Or let me know how you would like me to credit the authorship. I'd love to credit you :)
My question is how to get characters from the saved_model without the Python client. In addition, with __C.ARCH.INPUT_SIZE = (None, 32) I can run the Python client.
@kspook I don't get your point :)
I mean I get a compile(?) error unless I change __C.ARCH.INPUT_SIZE = (100, 32) to __C.ARCH.INPUT_SIZE = (None, 32).
@kspook Compile error? Please give the error details, which script you used, and how you used it :)
I have commented a lot. ^^ @MaybeShewill-CV, thank you. The export produced a saved_model, yet I still get an error from the test script, python tools/test_crnn.py:

```
{ "error": "Expected len(indices) == values.shape[0], but saw: 25 vs. 29\n\t [[{{node shadow_net/sequence_rnn_module/stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/fw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3}}]]" }
```
@kspook You mean you hit the error when using the HTTP/REST API to test the Python client?
@MaybeShewill-CV, no ^^. I had the error when I made the export file, including test_load_saved_model.
@kspook ==! Could you please state your question again, including the script you used and how you used it? I really have no idea what you're talking about :)
@MaybeShewill-CV, sorry, I was confused.
@kspook The error occurred when you used tools/export_saved_model.py?
@MaybeShewill-CV, no or yes.
@kspook If you change CFG.ARCH.INPUT_SIZE to [None, 32], you also have to make the sequence length in the beam search function (line 80 in tools/export_saved_model.py) dynamic, which is not supported here for now. I think you may need to build your own saved model for your specific needs :)
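(For illustration only, a minimal sketch, not the repo's code, of what a dynamic sequence length could look like. It assumes the CNN stage downsamples width by a factor of 4, so a width-100 input yields 25 time steps, consistent with the "25 vs. 29" error above; the placeholder shapes and num_classes = 37 are assumptions:)

```python
import tensorflow as tf

# Width-dynamic input: batch of 1, height 32, any width, 3 channels (assumed layout)
input_tensor = tf.placeholder(tf.float32, shape=[1, 32, None, 3], name='input_tensor')

# Derive the sequence length from the runtime width instead of a constant
seq_len = tf.shape(input_tensor)[2] // 4

# Stand-in for the network's time-major logits [max_time, batch, num_classes]
inference_logits = tf.placeholder(tf.float32, shape=[None, 1, 37])

decodes, _ = tf.nn.ctc_beam_search_decoder(
    inputs=inference_logits,
    sequence_length=tf.stack([seq_len]),  # one entry per batch element
    merge_repeated=False,
)
```

(As the comment above notes, the exporter currently passes a fixed sequence length, so this remains a sketch rather than a drop-in change.)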
@MaybeShewill-CV ok, I'll make a PR today! I'll also include the Docker run files I used. For some reason I'm having trouble with the new model weights, but it works well with the old weights (the ones still in my repo). This could also be a bug on my end; happy to move that to another ticket. Cheers
@eldon Thanks a lot. I noticed that you modified tools/export_saved_model.py and also wrote a new script named tfserve/export_saved_model.py. What's the difference between them? Is it necessary to keep both?
@eldon It seems the exported saved models have conflicts. Since I plan to merge your commit to implement the TensorFlow Serving function, I will remove my old exported saved model and use yours instead. Could you please open a new pull request once we've finished discussing how to merge the conflicts?
@eldon I have manually merged the conflicts. You may pull the updated code and see if anything differs from what you intended :)
@eldon You may open a new pull request to share your Docker file with us if you're willing to. Please let me know if you're confused by the new updates :)
Hi @MaybeShewill-CV, thanks for the notes! Let me respond to the questions:

tools/export_saved_model.py exports the saved model with the output as the sparse tensor (it is the original saved-model exporter, in case you don't want to use tf-serving). tfserve/export_saved_model.py breaks apart the output sparse tensor in case you want to use it with the tfserve client. It's your choice whether or not to keep the original under tools; I just left it there in case.

Thanks for helping with the merge! Really appreciate it. I'll merge on my end and create another PR. I don't use a Dockerfile, but rather a run command that uses Google's tf-serving image from Docker Hub. Cheers,
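(For reference, a hedged sketch of that kind of run command; the host model path is an assumption, and the repo's tfserve/run_tfserve_crnn_gpu.sh mentioned below is the authoritative version:)

```sh
# Serve the exported CRNN model with the stock tensorflow/serving image from Docker Hub
docker run -p 8501:8501 \
  --mount type=bind,source=$(pwd)/model/crnn_syn90k_saved_model,target=/models/crnn \
  -e MODEL_NAME=crnn -t tensorflow/serving
```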
Btw, thanks @MaybeShewill-CV for the shout-out in the readme! :)
Ah, this file should take care of it, so let's close the PR? Let me know. https://github.com/MaybeShewill-CV/CRNN_Tensorflow/blob/master/tfserve/run_tfserve_crnn_gpu.sh
@eldon I have closed that PR. You're welcome to open a new pull request :)
Ok, I think we can leave it as is; it's the same as what I use. :) Cheers
@eldon I will close this issue. Welcome to raise a new one :)
@MaybeShewill-CV thank you for your code.

I get this error:

```
{ "error": "Serving signature name: \"serving_default\" not found in signature def" }
```

How can I solve it?

----- script -----------