{ "error": "Serving signature name: \"serving_default\" not found in signature def" } #275

Closed
kspook opened this issue Jun 11, 2019 · 60 comments



kspook commented Jun 11, 2019

@MaybeShewill-CV thank you for your code.

I get this error: { "error": "Serving signature name: \"serving_default\" not found in signature def" }
How can I solve it?

----- script -----------

curl -X POST   http://localhost:9001/v1/models/crnn:predict   -H 'cache-control: no-cache'   -H 'content-type: application/json'   -d '{
  "inputs":
    {
    "input": { "b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAwICQoJBwwKCQoNDAwOER0TERAQESMZGxUdKiUsKyklKCguNEI4LjE/MigoOk46P0RHSktKLTdRV1FIVkJJSkf/2wBDAQwNDREPESITEyJHMCgwR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0f/wAARCAAfAHQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDZs9FF1awSfa0jmud/kxFCQ23rlh0qKHRrqeyjuYnhbzd2yLfh2wcHAPXpV+0hN9p2nLBdrbm283zpA4Biycg4yDzVzSDCLDS3YO1yiztAgICucnIJ/lQBz0GmXtxbfaILd5I84yuCfy61AYJhCJjDIIj0cqdp/Gtki3XwvbG4FxuLyNGYsYDdBuzVzTpWRNJgZj5DQzNKh+6w56jvQBytFdHJcSifRLaTY6tEgYOit8rMB3HoBVeX7Pc6tqEL20KLDDKsQiXaMqSQxx34oAxaK3bzSbWBGdgyrFZ5dkbIM27b37Z/lTDo9r9iGJphdfY/tZ4Gzb6euaAMWjNak+i+Xbeat2hZFjM6MpXy9+Mc856/pUV5pE9r5I863macr5axvljnocHHHvQBnk0tTmwu9zKtvIzLIYiFG75gCSOOvANRtFJGDvjdcMVO4EYI6j60AMpaKKACiiigBKVWKkMpII5BFJRQBZh1C9gAWK7mRR0UOcflUiavfpafZVnxDtKbdi9D1GcZqlRQBen1WaWa0laKEPa42FVxkDBAPPTjtjqaZaagYdUN7LGJN5csgOM7gc9c+tU2pKANa51g3Gjm1YMJnmLuwHyspJbH/fRqee/tPskk0VwxmktltlgKH92BjPzdD0P51hUUAdFrOo211ZXJsniUvMivxhpEC8Hn0Oeg9KbsS48W20MEqvFD5YRlOQQig9voawKMGgDrradZPsd1DbNHFJJPdSqGL4KqVzn3J6VHpSRzWNnG8jCbzftsjOeuH29/auainmgz5M0kZIwdrEcenFPS7uI87ZW5jMRyc/Ie3PagDX1KWQWdpAksUjzR+Y8XlbnLSEtwSDjr65qa6j017i1hlgSKF5cxzxptVosY2sc53bsA56e1ZX9q3JlhkcQs8TKysYlB46DIA4qeLWnVwr20X2fyni8qMleHOWIPJzmgBuoR2Ec6xtBLbSquJYkYsFbJ7t14weOOaKq6hcm9u2m8vYCAoXduwAMDJ7/WigD/2Q==" }
  }
}'
{ "error": "Serving signature name: \"serving_default\" not found in signature def" }
@MaybeShewill-CV (Owner)

@kspook How do you test the saved model:)


kspook commented Jun 11, 2019

tensorflow_model_server --port=9000 --rest_api_port=9001 --model_name=crnn --model_base_path=/home/kspook/CRNN_Tensorflow/model/crnn_syn90k_saved_model

curl -X POST http://localhost:9001/v1/models/crnn:predict -H 'cache-control: no-cache' -H 'content-type: application/json' -d '{
"inputs":
...
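
(Editorial aside, hedged: assuming the server above started cleanly, TF Serving's REST API also exposes a model-status endpoint that can confirm the model actually loaded before issuing any predict call:)

    curl http://localhost:9001/v1/models/crnn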

@MaybeShewill-CV (Owner)

@kspook I think you're supposed to modify the code in tools/export_saved_model.py to fix this:)


kspook commented Jun 11, 2019

I found it with saved_model_cli: the signature name is 'outputs'. ^^

Thanks anyway ^^
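
(For reference, the listing below is what saved_model_cli's show command prints; with the model path used later in this thread, the command would be:)

    saved_model_cli show --dir model/crnn_syn90k_saved_model/1/ --all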

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['outputs']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 32, 100, 3)
        name: input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['prediction'] tensor_info:
        dtype: DT_INT64
        shape: (-1, -1)
        name: 
  Method name is: tensorflow/serving/predict

kspook closed this as completed Jun 11, 2019
kspook reopened this Jun 12, 2019

kspook commented Jun 12, 2019

@MaybeShewill-CV, so you never tried to run tensorflow serving for this model?

I want to try, but the placeholder must be string type, while the model's input_tensor is float32. I can't convert between them.

I also asked about this issue at tensorflow serving. ( tensorflow/serving#994 (comment) )

With flask, the float32 input_tensor works well. Do you know anything about this?

@MaybeShewill-CV (Owner)

@kspook I haven't used the saved model with tensorflow serving in my local environment==!


kspook commented Jun 13, 2019

I am trying to set up tensorflow serving, but I get an error because the saved_model has no name for its output tensor.
tensorflow/serving#1100

But I can't figure out how to change the code. T.T Can you help me?

  1. error message

python tools/test_crnn.py
{ "error": "Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph" }

  2. saved_model_cli show --dir model/crnn_syn90k_saved_model/1/ --all
    MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

    signature_def['outputs']:
      The given SavedModel SignatureDef contains the following input(s):
        inputs['input_tensor'] tensor_info:
            dtype: DT_FLOAT
            shape: (1, 32, 100, 3)
            name: input_tensor:0
      The given SavedModel SignatureDef contains the following output(s):
        outputs['prediction'] tensor_info:
            dtype: DT_INT64
            shape: (-1, -1)
            name:
      Method name is: tensorflow/serving/predict

  3. test script

    import cv2
    import numpy as np
    import os
    import base64
    import json
    import requests
    import tensorflow as tf

    #image = r"/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg"
    image = cv2.imread("/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg", cv2.IMREAD_COLOR)
    image = image.astype(np.float32) / 255
    #image = np.array(image, np.float32) / 127.5 - 1.0
    #image = np.expand_dims(image, 0)

    image = image.tolist()

    URL = "http://localhost:9001/v1/models/crnn:predict"

    headers = {"content-type": "application/json"}
    body = {
        "signature_name": "outputs",
        "inputs": [
            image
        ]
    }
    r = requests.post(URL, data=json.dumps(body), headers=headers)
    print(r.text)

@MaybeShewill-CV (Owner)

@kspook I have used a tensorflow saved model with tensorflow serving here: https://github.com/MaybeShewill-CV/nsfw-classification-tensorflow. You may check whether you can find something useful there. I will look at this problem too once I get some spare time:)


kspook commented Jun 13, 2019

Thank you.

The nsfw-classification-tensorflow model has a name for its output['prediction'] tensor_info.

In decodes, _ = tf.nn.ctc_beam_search_decoder() in CRNN, 'decodes' is a list of length top_paths, where decodes[j] is a SparseTensor containing the decoded outputs. So, once I can give decodes[0] a name, I can solve this (see the sketch just below).

Some postings suggested building the TensorProto for the SparseTensor's components separately. However, I can't find confirmation that this is the correct approach.
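
(Editorial sketch, hedged: a SparseTensor itself cannot carry a graph name, but its component tensors can. The snippet below is a hypothetical illustration using tf.identity; decodes comes from the ctc_beam_search_decoder call above, and all the names are made up for illustration.)

    # Hypothetical sketch: name the SparseTensor's component tensors so a
    # SignatureDef can reference them individually.
    named_indices = tf.identity(decodes[0].indices, name='decoded_indices')
    named_values = tf.identity(decodes[0].values, name='decoded_values')
    named_shape = tf.identity(decodes[0].dense_shape, name='decoded_dense_shape')

    # Optionally re-assemble a SparseTensor from the named components.
    decoded = tf.SparseTensor(named_indices, named_values, named_shape)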

  1. https://stackoverflow.com/questions/52373193/tensor-not-found-with-empty-name-when-serving-the-model
  2. [Feature Request]:Assign the name to SaprseTensor when build_tensor_info of it tensorflow/tensorflow#22396 (comment)
  3. details for the nsfw-classification-tensorflow model:

    $ saved_model_cli show --dir ./1/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['classify_result']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 224, 224, 3)
        name: input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['prediction'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 5)
        name: nsfw_cls_model/final_prediction:0
  Method name is: tensorflow/serving/classify
  4. separate proto
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['outputs']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 32, 100, 3)
        name: input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_shape'] tensor_info:
        dtype: DT_INT64
        shape: (2)
        name: CTCBeamSearchDecoder:2
    outputs['indices'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 2)
        name: CTCBeamSearchDecoder:0
    outputs['values'] tensor_info:
        dtype: DT_INT64
        shape: (-1)
        name: CTCBeamSearchDecoder:1
  Method name is: tensorflow/serving/predict


eldon commented Jun 13, 2019

I was able to get this model to partially work with tf-serving, by decomposing the decodes[0] SparseTensor into indices, values, and dense_shape. This snippet explicitly names the output of the individual parts of the decodes[0] tensor (see tensorflow/tensorflow#22396). I then modify data_provider.tf_io_pipeline_fast_tools:sparse_tensor_to_str to accept these inputs separately.

    # from export_saved_model.py#85
    indices_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].indices)
    values_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].values)
    dense_shape_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].dense_shape)

    [...]

        # from export_saved_model.py#111
        # build SignatureDef protobuf
        signature_def = sm.signature_def_utils.build_signature_def(
            inputs={'input_tensor': saved_input_tensor},
            outputs={
                'decodes_indices': indices_output_tensor_info,
                'decodes_values': values_output_tensor_info,
                'decodes_dense_shape': dense_shape_output_tensor_info,
            },
            method_name=sm.signature_constants.PREDICT_METHOD_NAME,
        )

        # add graph into MetaGraphDef protobuf
        saved_builder.add_meta_graph_and_variables(
            sess,
            tags=[sm.tag_constants.SERVING],
            # the line below adds 'serving_default' to the signature def map
            signature_def_map={sm.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def},
        )

This then works on some of my example images, but seems to segfault on others without a stack trace. I think the problem has to do with the dynamic sizing of the bidirectional dynamic RNN (could there be a fixed batch size somewhere?). Happy to turn this into its own ticket for discussion. Cheers


eldon commented Jun 14, 2019

Btw, since someone might run into the same segfault issue, I solved it. Turns out using tf-serving out of the box with this model will cause segfaults on images with a width over 100. Changing the 100 to None in line 32 of config/global_config.py seems to stop the segfaulting:

__C.ARCH.INPUT_SIZE = (None, 32)  # synth90k dataset


kspook commented Jun 14, 2019

@eldon, how did you change data_provider.tf_io_pipeline_fast_tools:sparse_tensor_to_str?

> I was able to get this model to partially work with tf-serving, by decomposing the decodes[0] SparseTensor into indices, values, and dense_shape. […]


eldon commented Jun 14, 2019

@kspook I ended up adding my own function to tf_io_pipeline_fast_tools.py and calling it from the client script on the outputs of the API call. All it does is expand the arguments to accept the tensor components directly. Cheers

    def unpack_sparse_tensor_to_str(self, indices, values, dense_shape):
        values = np.array([self._ord_map[str(tmp) + '_index'] for tmp in values])

        number_lists = np.ones(dense_shape, dtype=values.dtype)
        str_lists = []
        res = []
        for i, index in enumerate(indices):
            number_lists[index[0], index[1]] = values[i]
        for number_list in number_lists:
            # Translate from ord() values into characters
            str_lists.append([self.int_to_char(val) for val in number_list])
        for str_list in str_lists:
            # int_to_char() returns '\x00' for an input == 1, which is the default
            # value in number_lists, so we skip it when building the result
            res.append(''.join(c for c in str_list if c != '\x00'))
        return res
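
(Editorial sketch, hedged: a hypothetical client-side usage of the function above, assuming the model was exported with the decodes_indices/decodes_values/decodes_dense_shape outputs from the earlier snippet, that `r` is the requests response from the predict call, and that `codec` is the repo's char-dict codec object:)

    # Decode the named sparse-tensor components returned by tf-serving.
    outputs = r.json()['outputs']
    words = codec.unpack_sparse_tensor_to_str(
        indices=np.array(outputs['decodes_indices'], dtype=np.int64),
        values=np.array(outputs['decodes_values'], dtype=np.int64),
        dense_shape=np.array(outputs['decodes_dense_shape'], dtype=np.int64),
    )
    print(words)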


kspook commented Jun 14, 2019

@eldon, with your solution, test_load_saved_model gets an error, so we end up needing two different __C.ARCH.INPUT_SIZE settings:

interpolation=cv2.INTER_LINEAR
TypeError: an integer is required (got type NoneType)

> Turns out using tf-serving out of the box with this model will cause segfaults on images with a width over 100. Changing the 100 to None in line 32 of config/global_config.py seems to stop the segfaulting […]


kspook commented Jun 17, 2019

@eldon, do you know how to feed a string to the placeholder? I made it work with float, but I can't with string.
@MaybeShewill-CV, why don't you update export_saved_model.py with @eldon's changes? ^^

  1. changed code for export_saved_model.py

    def preprocess_image(image_buffer):
        """Preprocess JPEG encoded bytes to 3D float Tensor."""
        image_size = tuple(CFG.ARCH.INPUT_SIZE)
        image = tf.image.decode_image(image_buffer, channels=3)
        image = tf.reshape(image, [1, image_size[1], image_size[0], 3])
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
        return image

    # ... and, in the graph-building section, a string placeholder replaces
    # the original float placeholder:
    image_size = tuple(CFG.ARCH.INPUT_SIZE)
    '''
    image_tensor = tf.placeholder(
        dtype=tf.float32,
        shape=[1, image_size[1], image_size[0], 3],
        name='input_tensor')
    '''
    raw_image = tf.placeholder(tf.string, name='tf_box')
    feature_configs = {
        'image/encoded': tf.FixedLenFeature(
            shape=[], dtype=tf.string),
    }
    tf_example = tf.parse_example(raw_image, feature_configs)
    jpegs = tf_example['image/encoded']
    image_string = tf.reshape(jpegs, shape=[])
    image_tensor = preprocess_image(image_string)
  2. test script

    import cv2
    import numpy as np
    import os
    import base64
    import json
    import requests
    import tensorflow as tf

    image = r"/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg"
    '''
    image = cv2.imread("/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg", cv2.IMREAD_COLOR)
    image = image.astype(np.float32) / 255
    image = image.tolist()
    '''
    URL = "http://localhost:9001/v1/models/crnn:predict"
    #URL = "http://{HOST:port}/v1/models/<modelname>/versions/1:classify"
    headers = {"content-type": "application/json"}
    image_content = base64.b64encode(open(image, 'rb').read()).decode("utf-8")
    body = {
        "signature_name": "serving_default",
        #"signature_name": "outputs",
        "inputs": [
            image_content
        ]
    }
    r = requests.post(URL, data=json.dumps(body), headers=headers)
    print(r.text)

  3. error message

    $ python tools/test_crnn.py
    { "error": "JSON Value: \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAwICQoJBwwKCQoNDAwOER0TERAQESMZGxUdKiUsKyklKCguNEI4LjE/MigoOk46P0RHSktKLTdRV1FIVkJJSkf/2wBDAQwNDREPESITEyJHMCgwR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0f/wAARCAAfAHQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDZs9FF1awSfa0jmud/kxFCQ23rlh0qKHRrqeyjuYnhbzd2yLfh2wcHAPXpV+0hN9p2nLBdrbm283zpA4Biycg4yDzVzSDCLDS3YO1yiztAgICucnIJ/lQBz0GmXtxbfaILd5I84yuCfy61AYJhCJjDIIj0cqdp/Gtki3XwvbG4FxuLyNGYsYDdBuzVzTpWRNJgZj5DQzNKh+6w56jvQBytFdHJcSifRLaTY6tEgYOit8rMB3HoBVeX7Pc6tqEL20KLDDKsQiXaMqSQxx34oAxaK3bzSbWBGdgyrFZ5dkbIM27b37Z/lTDo9r9iGJphdfY/tZ4Gzb6euaAMWjNak+i+Xbeat2hZFjM6MpXy9+Mc856/pUV5pE9r5I863macr5axvljnocHHHvQBnk0tTmwu9zKtvIzLIYiFG75gCSOOvANRtFJGDvjdcMVO4EYI6j60AMpaKKACiiigBKVWKkMpII5BFJRQBZh1C9gAWK7mRR0UOcflUiavfpafZVnxDtKbdi9D1GcZqlRQBen1WaWa0laKEPa42FVxkDBAPPTjtjqaZaagYdUN7LGJN5csgOM7gc9c+tU2pKANa51g3Gjm1YMJnmLuwHyspJbH/fRqee/tPskk0VwxmktltlgKH92BjPzdD0P51hUUAdFrOo211ZXJsniUvMivxhpEC8Hn0Oeg9KbsS48W20MEqvFD5YRlOQQig9voawKMGgDrradZPsd1DbNHFJJPdSqGL4KqVzn3J6VHpSRzWNnG8jCbzftsjOeuH29/auainmgz5M0kZIwdrEcenFPS7uI87ZW5jMRyc/Ie3PagDX1KWQWdpAksUjzR+Y8XlbnLSEtwSDjr65qa6j017i1hlgSKF5cxzxptVosY2sc53bsA56e1ZX9q3JlhkcQs8TKysYlB46DIA4qeLWnVwr20X2fyni8qMleHOWIPJzmgBuoR2Ec6xtBLbSquJYkYsFbJ7t14weOOaKq6hcm9u2m8vYCAoXduwAMDJ7/WigD/2Q==\" Type: String is not of expected type: float" }

@MaybeShewill-CV (Owner)

@eldon You may open a pull request if you're willing to:)


eldon commented Jun 17, 2019

Ahh @kspook, I got an error at test_load_saved_model too, but you can ignore it (since tf-serving loads the model differently and expects different inputs/outputs). It works fine in practice.

Another thing to consider is we noticed different results when leaving __C.ARCH.INPUT_SIZE = (100, 32) and resizing all input images to 100x32 versus allowing dynamic inputs. I'm not sure which of the results is "better" -- we haven't run benchmarks yet.

@kspook I haven't tried loading base64 encoded images directly. I think that would have to become part of the exported model for it to work in the request. For now, I use cv2 to read the image and send the array.tolist() output to tf-serving. I'd like to get string input to work, but that'll likely be further in the future...

@MaybeShewill-CV sure! I'm a bit busy this and next week but would be glad to. I'll see what I can come up with :)


kspook commented Jun 18, 2019

@eldon, thank you for your response. In case of base64, __C.ARCH.INPUT_SIZE = (None, 32) produces a compile error. So it should not be 'None' (?).

In addition, can you output characters in decodes_values? Currently it shows just numbers.


        "decodes_values": [
            14,
            33,
            20,
            8,
            26,
            31,
            4,
            34,
            33,
            31,
            19,
            26,
            29,
            20
        ],

@MaybeShewill-CV (Owner)

@eldon Thanks a lot:)

@kspook The model cannot support dynamic image width well for now, since the sequence-length hyperparameter is tied to the input image width:)


kspook commented Jun 18, 2019

@MaybeShewill-CV, can you make the exported model return characters instead of indices?

> In case of base64, __C.ARCH.INPUT_SIZE = (None, 32) produces a compile error. … Can you output characters in decodes_values? Currently it shows just numbers. […]


eldon commented Jun 18, 2019

@kspook don't forget to pass the outputs of the API call to codec.unpack_sparse_tensor_to_str as I defined above. :)


kspook commented Jun 18, 2019

@eldon, I get an error (however, it works well in test_load_saved_model):

Traceback (most recent call last):
  File "tools/export_saved_model.py", line 232, in <module>
    build_saved_model(args.ckpt_path, args.export_dir, args.char_dict_path, args.ord_map_dict_path)
  File "tools/export_saved_model.py", line 158, in build_saved_model
    prediction_val = codec.unpack_sparse_tensor_to_str(indices_output_tensor_info, values_output_tensor_info, dense_shape_output_tensor_info)
  File "/home/kspook/CRNN_Tensorflow/data_provider/tf_io_pipline_fast_tools.py", line 242, in unpack_sparse_tensor_to_str
    values = np.array([self._ord_map[str(tmp) + '_index'] for tmp in values])
TypeError: 'TensorInfo' object is not iterable


@MaybeShewill-CV (Owner)

@kspook I have updated the code and the saved model file. You may test the python client in the docker file. You're welcome to give any useful suggestions here:)
@eldon Thanks for your valuable prompt on the sparse tensor problem in tensorflow serving. Feel free to give any useful suggestions here:)


kspook commented Jun 20, 2019

@MaybeShewill-CV, thank you very much.

I still have two issues, one major and one minor.
The major one is running the curl script with string input.
The minor one is getting characters in the result of the SignatureDef
(i.e. getting characters like test_load_saved_model does).
^^


@MaybeShewill-CV (Owner)

@kspook 1. I have not tested deploying this model with the http/rest api.
2. The python client will give you the exact output words you want; you may try it first:)


kspook commented Jun 20, 2019

@MaybeShewill-CV, thank you. The export produced a saved_model, yet I still get an error from the test script:

python tools/test_crnn.py
{ "error": "Expected len(indices) == values.shape[0], but saw: 25 vs. 29\n\t [[{{node shadow_net/sequence_rnn_module/stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/fw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3}}]]" }

@MaybeShewill-CV (Owner)

@kspook Are you sure you have the same config as I pushed here?:)


kspook commented Jun 20, 2019

@MaybeShewill-CV, I use python 3.7. I had a similar situation with @eldon's code; once I downloaded @eldon's repository, it was solved.

Did you use the same checkpoint as in the README.md?

Evaluate the model on the synth90k dataset

In this repo you will find a model pre-trained on the Synth90k dataset. When the tfrecords file of the synth90k dataset has been successfully generated you may evaluate the model with the following script. The pretrained crnn model weights on Synth90k dataset can be found here.

python tools/evaluate_shadownet.py --dataset_dir PATH/TO/YOUR/DATASET_DIR \
--weights_path PATH/TO/YOUR/MODEL_WEIGHTS_PATH \
--char_dict_path PATH/TO/CHAR_DICT_PATH \
--ord_map_dict_path PATH/TO/ORD_MAP_PATH \
--process_all 1 --visualize 1

If I change 25 --> 29, I can't even export:
__C.ARCH.SEQ_LENGTH = 29 # synth90k dataset
After successfully exporting with 25, I get the same error.

@MaybeShewill-CV (Owner)

@kspook The python client will give you character predictions. Have you updated the repo and tested it==!


eldon commented Jun 22, 2019

@MaybeShewill-CV Looks like you used the snippets I posted above and committed to your repo, so I no longer need to do a PR?


eldon commented Jun 22, 2019

@MaybeShewill-CV actually since a lot of the new code comes from my repo, would you mind crediting authorship? Thanks,

@MaybeShewill-CV (Owner)

@eldon You may open your pull request and I will remove the same code from my part:)

@MaybeShewill-CV (Owner)

@eldon Or let me know how you would like me to credit the authorship. I'd love to credit your authorship:)


kspook commented Jun 23, 2019

My question is how to get the characters from the saved_model without the python client.

In addition, with __C.ARCH.INPUT_SIZE = (None, 32) I can run the python client, but test_load_saved_model fails. Thus I can't check the character output anyway.


@MaybeShewill-CV (Owner)

@kspook I don't get your point:)


kspook commented Jun 24, 2019

I mean I get a compile(?) error if I don't change __C.ARCH.INPUT_SIZE = (100, 32) to __C.ARCH.INPUT_SIZE = (None, 32).


@MaybeShewill-CV (Owner)

@kspook Compile error? Please give the error details, which script you used, and how you used it:)


kspook commented Jun 24, 2019

I already commented about it above. ^^

> The export produced a saved_model, yet I still get an error from the test script: python tools/test_crnn.py { "error": "Expected len(indices) == values.shape[0], but saw: 25 vs. 29 …" } […]

@MaybeShewill-CV (Owner)

@kspook You mean you met the error when you used the http/rest api to test the python client?

> @eldon, do you know how to feed a string to the placeholder? I made it work with float, but I can't with string. […]


kspook commented Jun 24, 2019

@MaybeShewill-CV , no. ^^. i had an error when i made export file including test_load_saved_model.

@MaybeShewill-CV (Owner)

@kspook ==! Could you please state your question again, including the script you used and how you used it? I really have no idea what you're talking about:)


kspook commented Jun 24, 2019

@MaybeShewill-CV, sorry, I was confused.

  1. __C.ARCH.INPUT_SIZE = (100, 32): compiles fine.
  2. __C.ARCH.INPUT_SIZE = (None, 32):

    I0624 19:32:49.190137 139938353473344 builder_impl.py:636] No assets to save.
    I0624 19:32:49.190171 139938353473344 builder_impl.py:456] No assets to write.
    I0624 19:32:49.304992 139938353473344 builder_impl.py:421] SavedModel written to: model/crnn_syn90k_saved_model/1/saved_model.pb
    Traceback (most recent call last):
      File "tools/export_saved_model.py", line 223, in <module>
        test_load_saved_model(args.export_dir, args.char_dict_path, args.ord_map_dict_path)
      File "tools/export_saved_model.py", line 152, in test_load_saved_model
        interpolation=cv2.INTER_LINEAR
    TypeError: an integer is required (got type NoneType)

@MaybeShewill-CV (Owner)

@kspook The error occurred when you used tools/export_saved_model.py?


kspook commented Jun 24, 2019

@MaybeShewill-CV, yes and no ^^. To be exact, I used 'export_crnn xxxx.sh'.

@MaybeShewill-CV (Owner)

@kspook If you change CFG.ARCH.INPUT_SIZE to [None, 32], you would have to set the sequence length in the beam search call (line 80 in tools/export_saved_model.py) dynamically, which is not supported here for now. I think you may need to build your own saved model according to your specific needs:)
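
(Editorial sketch, hedged: an illustration of the coupling being described, not working code from the repo. It assumes the CNN downsamples width by 4, so SEQ_LENGTH = 25 corresponds to the default INPUT_SIZE width of 100; inference_ret stands for the network output fed to the decoder in tools/export_saved_model.py:)

    # With a fixed width, the decoder's sequence length is a fixed constant;
    # a dynamic width would require computing it from the actual input width.
    input_width = CFG.ARCH.INPUT_SIZE[0]      # 100 in the default config
    seq_length = input_width // 4             # 25, i.e. __C.ARCH.SEQ_LENGTH
    decodes, _ = tf.nn.ctc_beam_search_decoder(
        inputs=inference_ret,
        sequence_length=seq_length * np.ones(1, dtype=np.int32),
        merge_repeated=False,
    )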


eldon commented Jun 24, 2019

@MaybeShewill-CV ok, I'll make a PR today! I'll also include the docker runfiles I used.

For some reason, I'm having trouble with the new model weights too, but it works well with the old weights (the ones still in my repo). This could also be a bug on my end... Happy to move that to another ticket. Cheers

@MaybeShewill-CV (Owner)

@eldon Thanks a lot. I see that you modified tools/export_saved_model.py and also wrote a new script named tfserve/export_saved_model.py. What's the difference between them? Is it necessary to keep them both?

@MaybeShewill-CV (Owner)

@eldon Seems like the exported saved models conflict. Since I plan to merge your commit to implement the tensorflow serving function, I will remove my old exported saved model and use yours instead. Could you please open a new pull request once we finish discussing how to merge the conflicts?

@MaybeShewill-CV (Owner)

@eldon I have manually merged the conflicts. You may pull the updated code and see if anything differs from what you intended:)

@MaybeShewill-CV (Owner)

@eldon You may open a new pull request to share your docker file with us if you're willing. Please let me know if anything in the new updates is confusing:)


eldon commented Jun 26, 2019

Hi @MaybeShewill-CV, thanks for the notes! Let me respond to the questions:

tools/export_saved_model.py exports the saved model with the output as the sparse tensor (it is the original saved model exporter, in case you don't want to use tf-serving). tfserve/export_saved_model.py breaks apart the output sparse tensor in case you want to use it with the tfserve client. It's your choice whether or not to keep the original under tools; I just left it there in case.

Thanks for helping with the merge! Really appreciate it. I'll merge on my end and create another PR. I don't end up using a dockerfile, but rather a run command that uses google's tfserving image on dockerhub. Cheers,
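
(Editorial sketch, hedged: a typical run command of that kind, using the tensorflow/serving image from dockerhub. The mount paths and port are assumptions for illustration, not the exact command from the repo's script:)

    docker run -t --rm -p 8501:8501 \
        -v "$(pwd)/model/crnn_syn90k_saved_model:/models/crnn" \
        -e MODEL_NAME=crnn \
        tensorflow/serving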


eldon commented Jun 26, 2019

Btw, thanks @MaybeShewill-CV for the shout-out in the readme! :)


eldon commented Jun 26, 2019

Ah, this file should take care of it, so let's close the PR? Let me know.

https://github.com/MaybeShewill-CV/CRNN_Tensorflow/blob/master/tfserve/run_tfserve_crnn_gpu.sh

@MaybeShewill-CV (Owner)

@eldon I have closed that PR. You're welcome to open a new one:)


eldon commented Jun 28, 2019

Ok, I think we can leave it as is, it's the same as what I use. :) Cheers

@MaybeShewill-CV (Owner)

@eldon I will close this issue. Feel free to raise a new one:)
