
Converting Model to TFLite using Tensorflow 2.16.1 fails - works w/ 2.15.0 #63987

Closed
gsexton opened this issue Mar 19, 2024 · 14 comments

@gsexton

gsexton commented Mar 19, 2024

1. System information

  • OpenSUSE Leap 15.5 AMD64 - Python 3.11
  • macOS 14.4 Sonoma - Python 3.12
  • TensorFlow installation: pip package
  • TensorFlow library: 2.16.1

2. Code

#converter = tf.lite.TFLiteConverter.from_saved_model(save_path)

converter = tf.lite.TFLiteConverter.from_keras_model(best_model)
# This is supposed to work. I copied it from the tf website, but
# it blows up.
#
print(f"Calling converter.convert()")
tflite_model = converter.convert()
print("Writing output")
with open(os.path.join(output_directory, 'model.tflite'), 'wb') as f:
    f.write(tflite_model)

3. Failure after conversion

WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1710168763.106515   29073 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1710168763.107037   29073 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
2024-03-11 08:52:43.107570: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: tftest/trained_model
2024-03-11 08:52:43.107869: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-03-11 08:52:43.107881: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: tftest/trained_model
2024-03-11 08:52:43.110991: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
2024-03-11 08:52:43.111327: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2024-03-11 08:52:43.134688: I tensorflow/cc/saved_model/loader.cc:218] Running initialization op on SavedModel bundle at path: tftest/trained_model
2024-03-11 08:52:43.138356: I tensorflow/cc/saved_model/loader.cc:317] SavedModel load for tags { serve }; Status: success: OK. Took 30788 microseconds.
2024-03-11 08:52:43.158539: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc(fused["ReadVariableOp:", "sequential_1/dense_1/add/ReadVariableOp@__inference_serving_default_124025"]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6

5. (optional) Any other info / logs

This issue was originally reported in #62610.

I've tested this on Linux AMD64 and on macOS 14.4 (ARM), and the same error occurs on both.

Per the comment from @adamantivm, I tried using TensorFlow 2.15.0 on Linux AMD64, and it worked as expected.

I also tried setting the MLIR_CRASH_REPRODUCER_DIRECTORY environment variable and re-running, but no output was produced.
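For reference, a minimal sketch of how I set it (the directory path is illustrative and must already exist, and the variable has to be in the process environment before the conversion runs):

import os
os.environ["MLIR_CRASH_REPRODUCER_DIRECTORY"] = "/tmp/mlir_repro"  # illustrative path, created beforehand

tflite_model = converter.convert()  # converter built as in section 2 above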

@gsexton gsexton added the TFLiteConverter For issues related to TFLite converter label Mar 19, 2024
@Aloqeely
Contributor

It seems like you might have a mix of keras and tf.keras imports in your code. Could you please check whether your best_model is currently built using tf.keras?
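A quick way to check is to print the model's type; a minimal sketch (the exact module path varies between Keras 2 and Keras 3 installs, so treat the printed values as illustrative):

print(type(best_model))             # the class the model was built with
print(type(best_model).__module__)  # the module path hints at which Keras implementation is in use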

@tilakrayal tilakrayal added comp:lite TF Lite related issues TF 2.16 type:bug Bug labels Mar 20, 2024
@tilakrayal
Contributor

@gsexton,
Could you please share simple standalone code to reproduce the issue? Also, please try executing the model code with tf-keras, as TensorFlow v2.16 uses Keras 3.0 by default. Thank you!

@tilakrayal tilakrayal added the stat:awaiting response Status - Awaiting response from author label Mar 20, 2024
@gsexton
Author

gsexton commented Mar 20, 2024

@tilakrayal The model is built via:

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1024, activation='relu'),
        tf.keras.layers.Dense(2048, activation='relu'),
        tf.keras.layers.Dense(2048, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])

    model.compile(
        loss=tf.keras.losses.binary_crossentropy,
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.003),
        metrics=[
            tf.keras.metrics.BinaryAccuracy(name='accuracy'),
            tf.keras.metrics.Precision(name='precision'),
            tf.keras.metrics.Recall(name='recall')
        ]
    )

    history = model.fit(X_train, y_train, epochs=NUM_TRAINING_EPOCHS)

The conversion call is:

converter = tf.lite.TFLiteConverter.from_keras_model(best_model)

So it seems like this should be OK.

My first attempt at conversion was:

tf.saved_model.save(best_model,save_path)

converter = tf.lite.TFLiteConverter.from_saved_model(save_path)
tflite_model = converter.convert()

That too gives:

loc(fused["ReadVariableOp:", "sequential_1_1/dense_4_1/Add/ReadVariableOp@__inference_serving_default_133793"]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6

@Aloqeely I'm not sure I follow what to do with this:

Also, please try executing the model code with tf-keras, as TensorFlow v2.16 uses Keras 3.0 by default. Thank you!

I think it is already doing what you suggest: using tf.keras.
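For what it's worth, a small diagnostic sketch to confirm which Keras is active (version numbers shown are illustrative):

import tensorflow as tf
import keras

print(tf.__version__)     # e.g. 2.16.1
print(keras.__version__)  # a 3.x value here means Keras 3 is in use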

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label Mar 20, 2024
@gsexton
Author

gsexton commented Mar 20, 2024

Here is a complete, simple program and data set that replicates the problem. You can switch between the two conversion paths by commenting/uncommenting lines 48-50 of model.txt.

support.csv
support_labels.csv
model.txt

@LakshmiKalaKadali
Contributor

Hi @gsexton,

I have reproduced the code with TF 2.15 and TF 2.16. With 2.15 it works as expected; as you mentioned, with TF 2.16.1 it aborts at TFLite conversion. Right now, TF 2.16 has an issue with Keras 3.0. As a workaround, install Keras 2 as follows:

pip install -U tf_keras  # Keras 2

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

Here is the gist with the TF 2.16 and Keras 2 workaround.

Thank You

@LakshmiKalaKadali LakshmiKalaKadali added the stat:awaiting response Status - Awaiting response from author label Mar 21, 2024
@gsexton
Author

gsexton commented Mar 21, 2024

I have tested with tf_keras installed, and the problem is resolved.

@gsexton gsexton closed this as completed Mar 21, 2024

@Wheest

Wheest commented Apr 6, 2024

I also experienced this bug, and it did not disappear with tf_keras installed. I had tensorflow==2.16.1 with the code below. I was only able to get it working with tensorflow==2.15.

#!/usr/bin/env python3

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, DepthwiseConv2D, Conv2D, Flatten, Dense

# Define a simple model with depthwise convolutions
model = Sequential()
model.add(Input(shape=(64, 64, 3)))
model.add(DepthwiseConv2D(kernel_size=(3, 3), activation="relu"))
model.add(Conv2D(64, (1, 1), activation="relu"))
model.add(Flatten())
model.add(Dense(100, activation="relu"))
model.add(Dense(10, activation="softmax"))

# Compile the model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])


# Define the representative dataset generator
def representative_dataset_gen():
    for _ in range(100):
        # Here you should provide a sample from your actual dataset
        # For illustration, we'll use random data
        # Yield a batch of input data (in this case, a single sample)
        yield [tf.random.normal((1, 64, 64, 3))]


# Set the TensorFlow Lite converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Set the optimization to default for int8 quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Define the representative dataset for quantization
converter.representative_dataset = representative_dataset_gen

# Restrict the target spec to int8 for full integer quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# Instruct the converter to make the input and output layer as integer
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# Convert the model
tflite_model = converter.convert()

# Save the model to a file
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print("Model has been successfully converted to TFLite and saved as 'model.tflite'.")

This crashes with the error:

/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/tensorflow/lite/python/convert.py:964: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.                                               
  warnings.warn(                                                                                                                                                                                                                               
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR                                                                                                                                                         
W0000 00:00:1712390503.405253   14212 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.                                                                                                                                                 
W0000 00:00:1712390503.405271   14212 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.                                                                                                                                       
2024-04-06 10:01:43.405735: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: /tmp/tmpvw6vrp68                                                                                                                                
2024-04-06 10:01:43.406193: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }                                                                                                                                   
2024-04-06 10:01:43.406204: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: /tmp/tmpvw6vrp68                                                                                                       
2024-04-06 10:01:43.409542: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled                                                                                                           
2024-04-06 10:01:43.410044: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.                                                                                                                                            
2024-04-06 10:01:43.469106: I tensorflow/cc/saved_model/loader.cc:218] Running initialization op on SavedModel bundle at path: /tmp/tmpvw6vrp68                                                                                                
2024-04-06 10:01:43.481939: I tensorflow/cc/saved_model/loader.cc:317] SavedModel load for tags { serve }; Status: success: OK. Took 76207 microseconds.                                                                                       
2024-04-06 10:01:43.496698: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.                                                       
loc(fused["ReadVariableOp:", callsite("sequential_1/conv2d_1/Reshape/ReadVariableOp@__inference_serving_default_132"("/home/wheest/Dropbox/home/proj/n64-dnn/tflite_model_gen/weenet.py":49:1) at callsite("/home/wheest/.virtualenvs/n64/lib/python3
.11/site-packages/tensorflow/lite/python/lite.py":1175:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1129:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/tenso
rflow/lite/python/lite.py":1636:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1614:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/tensorflow/lite/python/conve
rt_phase.py":205:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1537:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/backend/tensorflow/layer.py":58:1
 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/backend/tensorflow/layer.py":120:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsite("
/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/layers/layer.py":814:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsite("/home/wheest/.virtualenvs/n64
/lib/python3.11/site-packages/keras/src/ops/operation.py":48:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":156:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packa
ges/keras/src/models/sequential.py":202:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/models/functional.py":194:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/ops/functio
n.py":151:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/models/functional.py":578:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsi
te("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/layers/layer.py":814:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsite("/home/wheest/.virtualenvs
/n64/lib/python3.11/site-packages/keras/src/ops/operation.py":48:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":156:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-p
ackages/keras/src/layers/convolutional/base_conv.py":233:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/ops/numpy.py":4507:1 at callsite("/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/b
ackend/tensorflow/numpy.py":1545:1 at "/home/wheest/.virtualenvs/n64/lib/python3.11/site-packages/keras/src/backend/tensorflow/core.py":64:1))))))))))))))))))))))))))]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).                                                                                                                                                                                                    
Aborted (core dumped)   

@john-at-lovelace

Is there any chance we can fix this issue at the root cause? We found the same problem when saving a keras model with tf.saved_model.save(model) and then converting the SavedModel to TFLite.

@ilmari99

I have the same problem: in TF 2.16, converting a .keras model to .tflite gives LLVM ERROR: Failed to infer result type(s).
Loading the TF 2.16 model with TF 2.15 (using load_model) also doesn't work.
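Roughly the path I'm using (the file name is illustrative):

import tensorflow as tf

model = tf.keras.models.load_model("model.keras")  # model saved under TF 2.16 / Keras 3
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # aborts here with the LLVM error under TF 2.16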

@kventinel

Same issue.

@abnerh69

Same issue here. I tried:

pip install -U tf_keras  # Keras 2

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

but it didn't work.

Saving to tflite...
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1716173986.478554 526596 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1716173986.479258 526596 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
2024-05-19 21:59:46.482609: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: models_saved/EURUSD-OTC-L
2024-05-19 21:59:46.483692: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-05-19 21:59:46.483699: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: models_saved/EURUSD-OTC-L
2024-05-19 21:59:46.491139: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
2024-05-19 21:59:46.492726: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2024-05-19 21:59:46.520993: I tensorflow/cc/saved_model/loader.cc:218] Running initialization op on SavedModel bundle at path: models_saved/EURUSD-OTC-L
2024-05-19 21:59:46.526127: I tensorflow/cc/saved_model/loader.cc:317] SavedModel load for tags { serve }; Status: success: OK. Took 43529 microseconds.
2024-05-19 21:59:46.560649: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var MLIR_CRASH_REPRODUCER_DIRECTORY to enable.
loc(fused["ReadVariableOp:", "sequential_2_1/conv1d_2_1/Reshape/ReadVariableOp@__inference_serving_default_601338"]): error: missing attribute 'value'
LLVM ERROR: Failed to infer result type(s).

@filipang

Same issue here. I tried:

pip install -U tf_keras  # Keras 2
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

but it didn't work.


Have you made sure to set the env variable before importing the tensorflow module? This fixed it for me.
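A minimal sketch of the ordering that worked for me (assuming the tf_keras package is installed):

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must be set before tensorflow is imported

import tensorflow as tf  # now resolves tf.keras to the legacy Keras 2 (tf_keras) implementation

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # model as built in the comments above
tflite_model = converter.convert()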

@stabiloSlin

stabiloSlin commented Jun 7, 2024

W0000 00:00:1717755679.073267 6961451 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717755679.074262 6961451 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
2024-06-07 12:21:19.076353: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: saved_model/my_model_two.keras
2024-06-07 12:21:19.091621: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-06-07 12:21:19.091641: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: saved_model/my_model_two.keras
2024-06-07 12:21:19.386137: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2024-06-07 12:21:20.424521: I tensorflow/cc/saved_model/loader.cc:218] Running initialization op on SavedModel bundle at path: saved_model/my_model_two.keras
2024-06-07 12:21:20.694508: I tensorflow/cc/saved_model/loader.cc:317] SavedModel load for tags { serve }; Status: success: OK.

ConverterError                            Traceback (most recent call last)
Cell In[23], line 3
      1 # Convert the SavedModel to TensorFlow Lite model
      2 converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/my_model_two.keras")
----> 3 tflite_model = converter.convert()

Do you have an example of how to do it? @filipang
