
There are discrepancies between the outputs of the TFLite and converted ONNX models. #2323

Open
SuhwanSong opened this issue Apr 15, 2024 · 0 comments
Labels
bug An unexpected problem or unintended behavior

Comments


SuhwanSong commented Apr 15, 2024

Describe the bug

When the TensorFlow Lite (TFLite) model is converted to ONNX using the script below and the outputs of the TFLite model and the converted ONNX model are compared on the same input, they differ beyond the expected floating-point tolerance.

Urgency

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 18.04*): Linux Ubuntu 22.04
  • TensorFlow Version: 2.16.1
  • Python version: 3.10.12
  • ONNX version (if applicable, e.g. 1.11*): 1.16.0
  • ONNXRuntime version (if applicable, e.g. 1.11*): 1.16.3
  • tf2onnx version: 1.16.1

To Reproduce

poc link: https://compsec.snu.ac.kr/git/SuhwanSong/poc/-/raw/main/tf2onnx/poc_float32.tflite

  1. Download "poc_float32.tflite".
  2. Run the following script with the downloaded PoC file.
import tensorflow
import onnxruntime
import numpy as np

import tf2onnx
from einops import rearrange


if __name__ == "__main__":

    tflite_model_path = 'poc_float32.tflite'
    output_onnx_path = './converted.onnx'

    # Convert the TFLite model to ONNX
    tf2onnx.convert.from_tflite(
        tflite_model_path,
        opset=18,
        output_path=output_onnx_path
    )

    # Prepare a random input; the TFLite model expects NHWC (channels-last)
    input_np = np.random.randn(1, 3, 224, 224).astype('f')
    input_for_tf = rearrange(input_np, 'b c h w -> b h w c')

    # Load and run the ONNX model; query the input name instead of hardcoding it
    ort_session = onnxruntime.InferenceSession(output_onnx_path)
    input_name  = ort_session.get_inputs()[0].name
    ort_output  = ort_session.run(None, {input_name: input_for_tf})


    # Load TensorFlow Lite model
    interpreter = tensorflow.lite.Interpreter(tflite_model_path)
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.allocate_tensors()

    # Run TensorFlow Lite model
    interpreter.set_tensor(input_details[0]['index'], input_for_tf)
    interpreter.invoke()
    tflite_output = interpreter.get_tensor(output_details[0]['index'])

    # Compare ONNX and TensorFlow Lite outputs (full tensors, including the batch dim)
    if np.allclose(ort_output[0], tflite_output, rtol=1e-03, atol=1e-05):
        print("Test Passed: ONNX and TensorFlow Lite outputs match\n")
    else:
        print("Test Failed: ONNX and TensorFlow Lite outputs differ\n")
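To make the mismatch easier for maintainers to triage, the boolean check above could be extended to report the magnitude of the difference as well. A minimal numpy-only sketch of such a comparison helper (`report_diff` is a hypothetical name, shown here on synthetic arrays standing in for the real model outputs):

```python
import numpy as np

def report_diff(a, b, rtol=1e-3, atol=1e-5):
    """Return (matches, max_abs_diff) for two output tensors."""
    a, b = np.asarray(a), np.asarray(b)
    max_abs = float(np.max(np.abs(a - b)))
    return bool(np.allclose(a, b, rtol=rtol, atol=atol)), max_abs

# Synthetic stand-ins for ort_output[0] and tflite_output
ref  = np.array([1.0, 2.0, 3.0], dtype=np.float32)
cand = ref + np.float32(0.5)  # deliberately perturbed
ok, max_abs = report_diff(ref, cand)
print(ok, max_abs)  # False 0.5
```

Printing the maximum absolute difference distinguishes a small tolerance issue from a genuinely wrong conversion.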

Screenshots

  • poc_float32.tflite and converted ONNX model
    [screenshots comparing the two outputs omitted]
@SuhwanSong SuhwanSong added the bug An unexpected problem or unintended behavior label Apr 15, 2024