
[Mobile][Python][MacOS] Why are the outputs so different between Android and Python, even though the inputs and model.onnx are exactly the same? #19449

Closed
thelou1s opened this issue Feb 7, 2024 · 3 comments
Labels
platform:mobile (issues related to ONNX Runtime mobile; typically submitted using template), stale (issues that have not been addressed in a while; categorized by a bot)

Comments

thelou1s commented Feb 7, 2024

Describe the issue

Hi,
Why are the outputs so different between Android and Python, even though the inputs and the model.onnx are exactly the same?
I checked that the Python ONNX Runtime version is 1.16.3 and the Android ONNX Runtime version is also 1.16.3, but I get very different outputs for the same inputs. Please help, thank you.

[Screenshots: the model outputs from Python and from Android, showing different values]

To reproduce

1. Run the Python script to predict on a wav file (a sketch of this step is shown below).
2. Run the Android app to predict on the same wav file.
3. Observe very different outputs.
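Below is a minimal sketch (not the reporter's actual script) of what the Python step could look like, assuming onnxruntime and numpy are installed; the zero-filled input is a placeholder for the real wav-derived features.

```python
# Hypothetical sketch of the Python side of the comparison; the real repro
# would feed features extracted from the wav file instead of zeros.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_meta = session.get_inputs()[0]

# Placeholder input tensor: substitute the actual wav-derived features here.
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy_input = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {input_meta.name: dummy_input})
np.save("python_output.npy", outputs[0])  # keep for comparison with Android
print(outputs[0])
```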

Urgency

No response

Platform

Android

OS Version

Android 10

ONNX Runtime Installation

Released Package

Compiler Version (if 'Built from Source')

1.16.3

Package Name (if 'Released Package')

onnxruntime-android

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

C++/C

Architecture

ARM64

Execution Provider

Other / Unknown

Execution Provider Library Version

No response

thelou1s added the platform:mobile label on Feb 7, 2024
skottmckay (Contributor) commented:

Different numbers are expected, especially if the model involves lots of floating-point operations (e.g. look for MatMul/Gemm/Conv operations in the model).

The order of the individual operations will affect the exact value produced by each node. There are many additions and multiplications in a single MatMul, and MatMul is used in Gemm, Conv, and other operations. The order of each set of multiplications matters, and the order in which those products are added together matters, but there's no rule about the order. I.e., mathematically a x b x c == c x b x a, but you can get two different results due to how floating-point numbers work.
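As a small, self-contained illustration of the ordering point (my own example, not taken from the model in question), the same sum evaluated in two different orders gives two different floating-point results:

```python
# Floating-point addition is not associative: the same mathematical sum can
# give different results depending on evaluation order, which is what happens
# inside MatMul/Gemm/Conv kernels on different platforms.
a, b, c = 1e-8, 1.0, -1.0

left_to_right = (a + b) + c   # ~9.9999999e-09
right_to_left = a + (b + c)   # exactly 1e-08

print(left_to_right == right_to_left)      # False
print(abs(left_to_right - right_to_left))  # tiny, but non-zero
```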

The low-level instructions used to execute the operations differ by platform/architecture (e.g. various AVX instruction sets on intel/amd, NEON on arm, etc.). These differences accumulate with each node and are magnified by nodes that do a lot of calculation (e.g. MatMul/Conv/Gemm).

All these things contribute to differences when the same model is run with the same input on different platforms or with different execution providers on the same platform.

After converting the raw float outputs into something meaningful, are the predictions actually different?
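For example (a hedged sketch with made-up numbers, only to illustrate the comparison being suggested), compare the post-processed predictions rather than the raw floats:

```python
# Illustrative only: these arrays are made-up stand-ins for the Python and
# Android outputs, not values from this issue.
import numpy as np

python_scores = np.array([0.12, 0.55, 0.33], dtype=np.float32)
android_scores = np.array([0.11, 0.57, 0.32], dtype=np.float32)

# The raw floats differ slightly...
print(np.max(np.abs(python_scores - android_scores)))  # ~0.02

# ...but the final prediction (argmax / top class) may still be identical,
# which is usually what matters for the application.
print(np.argmax(python_scores) == np.argmax(android_scores))  # True
```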

tianleiwu (Contributor) commented Feb 7, 2024

Before investigating the difference, it is recommended to run an end-to-end accuracy evaluation with a measurement set. Do the following only if there is a significant accuracy difference.

At first sight, the difference looks larger than expected (a typical NN model with proper layer normalization will not show such a large difference). To investigate, you need to get the intermediate output of each node; then it is easy to find the first node that causes a major difference (for example, absolute diff > 0.001) between Android and Python.

There are two ways to get node outputs:
(1) Build from source to enable node input/output dumping: https://onnxruntime.ai/docs/build/inferencing.html#debugnodeinputsoutputs
(2) Add some node outputs to the graph outputs and run inference, as in the sketch below.
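A rough sketch of approach (2), assuming the Python onnx and onnxruntime packages; the tensor name "conv1_output" is a placeholder for whichever intermediate output you want to inspect:

```python
# Sketch: expose an intermediate tensor as an extra graph output, then run the
# modified model and read that tensor's value. "conv1_output" is a placeholder.
import numpy as np
import onnx
import onnxruntime as ort

model = onnx.load("model.onnx")
intermediate = onnx.helper.make_tensor_value_info(
    "conv1_output", onnx.TensorProto.FLOAT, None)
model.graph.output.append(intermediate)
onnx.save(model, "model_debug.onnx")

session = ort.InferenceSession("model_debug.onnx")
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy_input = np.zeros(shape, dtype=np.float32)  # replace with the real input

# The appended output is returned after the original outputs; dump the same
# tensor on Android and in Python to find the first node that diverges.
outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[-1])
```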

github-actions bot commented Mar 8, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label on Mar 8, 2024