how to export frozen graph #17

Closed
ThiagoMateo opened this issue May 31, 2019 · 7 comments

@ThiagoMateo

Hello @zzh8829, thanks for your code!
How do I convert your checkpoint to a frozen graph (.pb file)?

@ThiagoMateo
Author

I see this function in the utils.py file:

def freeze_all(model, frozen=True):
    model.trainable = not frozen
    if isinstance(model, tf.keras.Model):
        for l in model.layers:
            freeze_all(l, frozen)

Is this the code for freezing the model? And how do I freeze the model?

@zzh8829
Owner

zzh8829 commented May 31, 2019

The freeze_all function is for transfer learning in the training process.
I haven't seen any mention of frozen graphs in TensorFlow 2.0, so they are probably deprecated now.
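
Rough usage sketch for transfer learning (the layer name 'yolo_darknet' is an assumption about the model definition; the actual train.py may differ):

from yolov3_tf2.models import YoloV3
from yolov3_tf2.utils import freeze_all

# build the training-mode model, then freeze only the backbone so that
# transfer learning updates just the detection heads
model = YoloV3(416, training=True, classes=80)
darknet = model.get_layer('yolo_darknet')  # layer name is an assumption
freeze_all(darknet)                        # sets trainable=False recursively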

The official guide recommends the SavedModel format:
https://www.tensorflow.org/alpha/guide/saved_model
which is implemented in export_tfserving.py.

SavedModel uses the .pb format, but it comes with extra files for the variables and parameters.
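
A minimal export sketch along the lines of export_tfserving.py (checkpoint path, output directory, and class count are placeholders):

import tensorflow as tf
from yolov3_tf2.models import YoloV3

yolo = YoloV3(classes=80)
yolo.load_weights('./checkpoints/yolov3.tf')     # checkpoint path is a placeholder
tf.saved_model.save(yolo, './serving/yolov3/1')  # versioned SavedModel output dir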

@ThiagoMateo
Author

Hello @zzh8829, thanks for your help. I am trying to deploy the serving model using TensorRT Inference Server, but I get this error:

2019-06-04 02:58:55.785173: W external/org_tensorflow/tensorflow/core/kernels/partitioned_function_ops.cc:197] Grappler optimization failed. Error: Op type not registered 'CombinedNonMaxSuppression' in binary running on 1984ec4fe5aa. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2019-06-04 02:58:55.806209: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at partitioned_function_ops.cc:118 : Not found: Op type not registered 'CombinedNonMaxSuppression' in binary running on 1984ec4fe5aa. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

Do you know why?

@andydion

andydion commented Jun 4, 2019

That might be related to an outdated tensorflow/serving.

Try a newer version and see if that works:
docker pull tensorflow/serving:nightly-gpu

@ThiagoMateo
Author

Thanks @andydion, but I am using TensorRT Inference Server instead of TF Serving,
so I don't know how to fix this.

@ThiagoMateo
Author

ThiagoMateo commented Jun 9, 2019

Hello @andydion, I wrote a client, but it doesn't work.
I need some help.

import time
from absl import app, flags, logging
from absl.flags import FLAGS
import cv2
import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

def main(_argv):
    host = '0.0.0.0'
    port = '8671'
    tf_serving = '0.0.0.0:9000'

    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'reader'
    request.model_spec.signature_name = 'serving_default'

    img = cv2.imread("1.jpg")
    img = cv2.resize(img, (416, 416))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.astype('float32') 

    tensor = tf.contrib.util.make_tensor_proto(img, shape=[1]+list(img.shape))
    request.inputs['input_1'].CopyFrom(tensor)
    resp = stub.Predict(request, 30.0)
    print("resp: ", resp)


if __name__ == '__main__':
    try:
        app.run(main)
    except SystemExit:
        pass

@zzh8829
Copy link
Owner

zzh8829 commented Dec 21, 2019

Hi, I think this is an issue with TensorRT compatibility; I recommend using TensorFlow Serving instead.
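
For reference, a minimal TensorFlow Serving gRPC client sketch: note that tf.contrib.util.make_tensor_proto no longer exists in TF 2.x, so tf.make_tensor_proto would be needed instead (the host, port, model name, and dummy input below are assumptions, not values from your setup):

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('0.0.0.0:8500')   # TF Serving gRPC port (assumption)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'yolov3'                # model name is an assumption
request.model_spec.signature_name = 'serving_default'

img = np.zeros((1, 416, 416, 3), dtype=np.float32)  # placeholder for a preprocessed image
request.inputs['input_1'].CopyFrom(tf.make_tensor_proto(img))

response = stub.Predict(request, timeout=30.0)
print(response)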

zzh8829 closed this as completed Dec 21, 2019