
Different results between TensorFlow model and ONNX model. #2096

Open
chielingyueh opened this issue Dec 16, 2022 · 2 comments
Labels
bug An unexpected problem or unintended behavior

Comments

@chielingyueh

Hi,

I converted a TensorFlow model to an ONNX model:

import tensorflow as tf
import tf2onnx

# Export the Keras model with a fixed input signature.
spec = (tf.TensorSpec((None, 256), tf.int32, name="input_ids"),)
tf2onnx.convert.from_keras(model, output_path='model_biomarker.onnx', input_signature=spec)

However, when I make an inference on the ONNX model, the output is different from what I get from the TensorFlow model.
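One way to quantify such a mismatch is to run the same batch through both models and look at the largest element-wise difference. This is a minimal sketch, not from the issue itself: the Keras `model`, the `model_biomarker.onnx` path, and the random `input_ids` batch are assumptions based on the snippet above.

```python
import numpy as np

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two arrays."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.max(np.abs(a - b)))

def compare_tf_vs_onnx(model, onnx_path="model_biomarker.onnx"):
    """Run one random batch through both models and report the gap.

    Assumes `model` is the Keras model from the snippet above and that
    tensorflow + onnxruntime are installed (imported lazily here).
    """
    import tensorflow as tf
    import onnxruntime as ort

    # Shape (1, 256) and vocab range 1000 are illustrative placeholders.
    ids = np.random.randint(0, 1000, size=(1, 256), dtype=np.int32)

    tf_out = np.asarray(model(tf.constant(ids)))          # TensorFlow result
    sess = ort.InferenceSession(onnx_path)
    onnx_out = sess.run(None, {"input_ids": ids})[0]      # ONNX Runtime result

    # Float32 conversions rarely match bit-for-bit; differences around 1e-5
    # are normal numerical noise, while large gaps suggest a conversion bug.
    return max_abs_diff(tf_out, onnx_out)
```

If the reported gap is on the order of float32 rounding error, the models effectively agree; anything larger is worth investigating.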

Could anyone help me understand why the TensorFlow and ONNX model outputs differ?

Thanks!

System information

  • OS Platform and Distribution: macOS Monterey 12.3
  • TensorFlow version: 2.4.1
  • Python version: 3.9.7
  • ONNX version: 1.13.0
  • ONNXRuntime version: 1.13.1
@chielingyueh added the bug (An unexpected problem or unintended behavior) label on Dec 16, 2022
@cosineFish
Contributor

Hi @chielingyueh ,
The difference could be caused by an issue in tf2onnx. You can compare each layer's output to locate the suspicious layer.
Please also attach the TensorFlow model, or show how it is created, so that others can help.
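The layer-by-layer comparison suggested here could be sketched roughly as follows. This is an illustration, not code from the thread: the probe-model approach, the `1e-4` tolerance, the `probe.onnx` filename, and the `input_ids` name (taken from the export snippet above) are all assumptions, and it presumes a Keras `model` with tensorflow, tf2onnx, and onnxruntime installed.

```python
import numpy as np

def first_divergent(name_to_pairs, atol=1e-4):
    """Return the first layer name whose (tf, onnx) output pair differs by
    more than `atol`, or None if every pair matches. `name_to_pairs` maps
    layer name -> (tf_output, onnx_output), in model order."""
    for name, (a, b) in name_to_pairs.items():
        if not np.allclose(a, b, atol=atol):
            return name
    return None

def compare_layers(model, input_ids):
    """Locate the first layer where TF and ONNX outputs start to diverge."""
    import tensorflow as tf
    import tf2onnx
    import onnxruntime as ort

    # Build a "probe" model that exposes every intermediate layer output.
    probe = tf.keras.Model(model.input, [l.output for l in model.layers])
    tf_outs = [np.asarray(o) for o in probe(input_ids)]

    # Convert the probe model so ONNX Runtime returns the same intermediates.
    spec = (tf.TensorSpec(input_ids.shape, tf.int32, name="input_ids"),)
    tf2onnx.convert.from_keras(probe, input_signature=spec,
                               output_path="probe.onnx")
    sess = ort.InferenceSession("probe.onnx")
    onnx_outs = sess.run(None, {"input_ids": np.asarray(input_ids)})

    pairs = {l.name: (a, b)
             for l, a, b in zip(model.layers, tf_outs, onnx_outs)}
    return first_divergent(pairs)
```

The first divergent layer narrows the bug down to a single op conversion, which makes a tf2onnx issue report much easier to act on.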

@jdxyw

jdxyw commented Apr 24, 2023

I have the same issue.

OS Platform and Distribution: macOS 13.2.1
TensorFlow version: 2.11.0
Python version: 3.7
ONNX version: 1.14.0

file_path = "{}/onnx_model/{}_{}.onnx".format(self.output_path, "model", epoch)
spec = (tf.TensorSpec((1, unified_config.list_size, len(unified_config.feature_cols)), tf.float32, name="input"),)
tf2onnx.convert.from_keras(self.model, input_signature=spec, output_path=file_path)

Above is how I export the model to ONNX.
