Exporting LSTM with TFLite converter yields a model that makes bad predictions, in contrast to the Keras model #55835

Closed
leeflix opened this issue May 2, 2022 · 6 comments

leeflix commented May 2, 2022

Issue Type

Bug

Source

source

Tensorflow Version

tf 2.8

Custom Code

No

OS Platform and Distribution

No response

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/Compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current Behaviour?

I want to export the model from this Keras tutorial (https://keras.io/examples/vision/captcha_ocr/) to the TFLite format. I found two ways to export the model:

1. Setting the flag tf.lite.OpsSet.SELECT_TF_OPS
2. Using the function get_concrete_function

Option 1 produces correct predictions, but I would like to avoid using that flag. With option 2 the export succeeds, but the exported model's predictions are bad.

Standalone code to reproduce the issue

You can go through the Keras notebook and add the following code for each of the two options.

Code for option 1:

import tensorflow as tf

# `prediction_model` comes from the Captcha OCR tutorial notebook.
model = prediction_model

# Allow the converter to fall back to TensorFlow ops for anything the
# TFLite builtin ops cannot express.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

----------------------------------------------------------------

Code for option 2:

model = prediction_model

# Wrap the model in a tf.function and trace a concrete function with a
# fixed batch size of 1 so the converter sees fully static input shapes.
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(tf.TensorSpec([1] + model.inputs[0].shape[1:], model.inputs[0].dtype))

# Directory for the SavedModel export.
MODEL_DIR = "keras_lstm"
model.save(MODEL_DIR, save_format="tf", signatures=concrete_func)

converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
tflite_model = converter.convert()

with open('flag_diopter_model.tflite', 'wb') as f:
    f.write(tflite_model)
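
For reference (not part of the original report), a minimal sketch for comparing the converted model against the Keras model on a single random dummy input; it assumes `prediction_model` from the tutorial notebook is still in scope and uses the file written above:

import numpy as np
import tensorflow as tf

# Random dummy input with batch size 1 (an assumption for illustration only).
x = np.random.random_sample([1] + list(prediction_model.inputs[0].shape[1:])).astype(np.float32)
keras_out = prediction_model(x).numpy()

# Run the same input through the converted TFLite model.
interpreter = tf.lite.Interpreter(model_path="flag_diopter_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
tflite_out = interpreter.get_tensor(output_details[0]["index"])

# A large difference here indicates the converted model diverges from Keras.
print("max abs diff:", np.max(np.abs(keras_out - tflite_out)))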

Relevant log output

No response

mohantym commented May 3, 2022

Hi @leeflix! Could you let us know the reason for not using Select TF Ops (since it is already giving correct predictions)?

leeflix commented May 3, 2022

When using Select TF Ops I have to link much bigger binaries.

sachinprasadhs commented May 5, 2022

You can reduce the binary size of your model using selective builds: https://www.tensorflow.org/lite/guide/reduce_binary_size.
Selective builds skip unused operations in your model set and produce a compact library with just the runtime and the op kernels required for the model to run on your mobile device.
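
As a hedged illustration (not from the original thread): when planning a selective build it helps to know which ops the converted model actually contains, including any Select TF (Flex) ops. If the experimental model analyzer is available in your TensorFlow version, something like the following prints that op inventory:

import tensorflow as tf

# Experimental API; availability depends on the installed TF version.
tf.lite.experimental.Analyzer.analyze(model_path="model.tflite")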

leeflix commented May 5, 2022

Thanks for the suggestion. I will look into it, but can someone explain why option 2 produces a model that makes bad predictions?

sachinprasadhs commented May 9, 2022

This could be because some of the ops are not supported by the TFLite runtime, so using Select TF Ops has the advantage of falling back to the TensorFlow ops; you can see more detail here.
You can also check this comparison of binary file sizes when using different builds.
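
A small sketch (not from the original thread) of one way to surface unsupported ops at conversion time: convert with only the TFLite builtin ops (the default) and inspect the converter error, if any. When the conversion succeeds anyway, as with option 2 above, comparing the TFLite output against the Keras output on the same input is the practical check.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(prediction_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]  # no TF op fallback
try:
    converter.convert()
except Exception as e:  # the converter error typically names the unsupported ops
    print(e)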

@leeflix leeflix closed this as completed May 9, 2022