
Problem converting from saved model to tflite model #62610

Closed
NotPjoker05 opened this issue Dec 10, 2023 · 14 comments
Labels: comp:lite · stale · stat:awaiting response · TFLiteConverter · type:support

@NotPjoker05

Hi, I'm trying to convert my model (saved in the SavedModel format) to a TFLite model, but I get an error. This is my code:

```python
import tensorflow as tf

# Convert the SavedModel directory to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
tflite_model = converter.convert()

# Write the converted model to disk.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

The error is this:

```
loc(fused["ReadVariableOp:", "sequential_1/conv2d_1/ReadVariableOp@__inference_serving_default_285"]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).
```

I read the TensorFlow page on this topic, which explains that my model probably needs refactoring. I tried to follow that guidance, but the error I get is the same. My other code is this:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops
]
tflite_model = converter.convert()

with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
```
I hope someone is able to help me. Thanks in advance!

@tilakrayal added the comp:lite, type:support, and TFLiteConverter labels on Dec 11, 2023
@LakshmiKalaKadali (Contributor)

Hi @NotPjoker05,

Could you please fill out the issue template so we can investigate?
Also, please try the code with TensorFlow 2.15, if you haven't already.
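
For reference, a quick way to confirm which TensorFlow version is actually in use:

```python
import tensorflow as tf
print(tf.__version__)  # should print '2.15.x' if the suggested version is installed
```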

Thank You

@LakshmiKalaKadali added the stat:awaiting response label on Dec 13, 2023
@NotPjoker05 (Author)

I'm already using TensorFlow 2.15.

1. System information

  • OS: Windows 10
  • TensorFlow installed via the PyCharm GUI
  • TensorFlow version: 2.15

2. Code

Here are my three files: Training trains my model, utils provides the helper methods used in training, and Test is the class where I use my model (predictions work in TensorFlow, but the conversion from SavedModel to TFLite doesn't).

Training.txt
utils.txt
Test.txt

3. Failure after conversion

When I try to convert my model with the code in the Test class, I receive this error:

```
loc(fused["ReadVariableOp:", "sequential_1/conv2d_1/ReadVariableOp@__inference_serving_default_285"]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).
```

Please help me, I've been stuck on this error for weeks...

@google-ml-butler bot removed the stat:awaiting response label on Dec 13, 2023
@LakshmiKalaKadali (Contributor)

@NotPjoker05 ,
Please provide the input data to reproduce the error.
Thank You

@LakshmiKalaKadali added the stat:awaiting response label on Dec 18, 2023
@NotPjoker05 (Author)

Sure, this is my dataset:
https://mega.nz/file/UrVUEDIK#mHkxTofcMjTWiBMzbALDDbnh1CJkZgpsKTezd5zgXTc

Thank you very much!

@google-ml-butler bot removed the stat:awaiting response label on Dec 18, 2023
@LakshmiKalaKadali (Contributor)

Hi @NotPjoker05 ,

Sorry for the delay. I executed the code in Colab with TensorFlow 2.15 and it works as expected: the SavedModel inference and the TFLite inference (for both model.tflite and converted_model.tflite) give the same results. Please find the gist.
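
For anyone who wants to reproduce that comparison, here is a minimal sketch (assuming the SavedModel exposes the default serving_default signature with a single float32 input, and using the paths from earlier in this thread):

```python
import numpy as np
import tensorflow as tf

# Load the SavedModel and grab its serving function (signature name assumed).
saved = tf.saved_model.load('saved_model')
infer = saved.signatures['serving_default']

# Load the converted TFLite model.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run both models on the same random input and compare outputs.
x = np.random.rand(*inp['shape']).astype(np.float32)
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
tflite_y = interpreter.get_tensor(out['index'])

saved_y = list(infer(tf.constant(x)).values())[0].numpy()
print('outputs match:', np.allclose(saved_y, tflite_y, atol=1e-5))
```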

Thank You

@LakshmiKalaKadali added the stat:awaiting response label on Dec 22, 2023

This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.

@github-actions bot added the stale label on Dec 30, 2023

github-actions bot commented Jan 6, 2024

This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.

@github-actions bot closed this as completed on Jan 6, 2024

@BaeBae33

Same issue. Colab crashes without any error.

@adamantivm

Just in case it helps others struggling with this issue: I realized I was using TensorFlow 2.16.1, from the Docker image tensorflow:latest-gpu, and that was the source of the problem.
I switched to the Docker image tensorflow:2.15.0-gpu and the problem went away; everything works fine now.
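
For example (a sketch of that workaround, assuming the conversion code lives in a script named convert.py, a name chosen here for illustration):

```bash
# Pull the known-good image and run the conversion inside it.
docker pull tensorflow/tensorflow:2.15.0-gpu
docker run --gpus all -it -v "$PWD":/work -w /work \
    tensorflow/tensorflow:2.15.0-gpu python convert.py
```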

@israelfelix10

@adamantivm Thanks for the help with my project, good idea! This solved the problem.

@kventinel

Same issue.

@jdsalmonson

Same issue here. 2.16.1 crashed, 2.15 worked.

@fabrizio-indirli

I have the same issue on TF 2.16.1 on my local machine (it works on 2.15, as suggested) when running the following script, which creates a very small NN in Keras and then converts it to TFLite:

```python
import argparse
import keras
import tensorflow as tf

INPUT_SHAPE = (5, 5, 3)
CONV_FILTERS = 6
CONV_KER_SIZE = (2, 2)
FC_UNITS = 4

def build_model():
    model = keras.Sequential(
        [
            keras.Input(shape=INPUT_SHAPE, batch_size=1),
            keras.layers.Conv2D(CONV_FILTERS, CONV_KER_SIZE),
            keras.layers.Flatten(),
            keras.layers.Dense(FC_UNITS, activation='softmax'),
        ]
    )

    model.summary()
    return model

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", "-o", type=str, help="Output filename", default=None)
    parser.add_argument("--totflite", "-t", type=str, help="Output TFlite filename", default=None)
    args = parser.parse_args()
    model = build_model()

    if args.output:
        model.save(args.output)

    if args.totflite:
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        tflite_model = converter.convert()
        with open(args.totflite, 'wb') as f:
            f.write(tflite_model)
```
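
Assuming the script above is saved as repro.py (a name chosen here for illustration), the failure can be reproduced with:

```bash
python repro.py -o saved_model -t model.tflite
```

On TF 2.16.1 the convert() call crashes; on 2.15 it completes normally.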
