
Huge difference between the output from tflite model and kmodel #114

Closed
zye1996 opened this issue Apr 17, 2020 · 5 comments

zye1996 commented Apr 17, 2020

Hi,

I am trying to convert a tflite model to a kmodel, but when testing the outputs of both models I found a huge difference after conversion.

Here is my tflite model and converted kmodel.
Archive.zip
I used the command:

```
ncc compile model.tflite model.kmodel -i tflite -o kmodel -t k210 --inference-type uint8 --dataset images --input-mean 0.5 --input-std 0.5
```

My original input range is [-1, 1].

I also tried float inference and got the same mismatch: the output from the kmodel is on the order of 10^2, while the output from the tflite model is on the order of 10^-2.
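
For reference, here is a minimal sketch of how I understand the --input-mean/--input-std preprocessing (my assumption from the flag names, not verified against the nncase source): the uint8 input is scaled to [0, 1] and then normalized as (x - mean) / std, so mean 0.5 and std 0.5 reproduce the [-1, 1] range:

```python
import numpy as np

# Assumed preprocessing: uint8 pixel -> [0, 1] -> (x - mean) / std.
# With mean = 0.5 and std = 0.5 this maps the input to [-1, 1],
# matching the range the tflite model was trained with.
mean, std = 0.5, 0.5

pixels = np.array([0, 128, 255], dtype=np.uint8)
scaled = pixels.astype(np.float32) / 255.0   # [0, 1]
normalized = (scaled - mean) / std           # [-1, 1]
print(normalized)  # [-1.          0.00392157  1.        ]
```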

sunnycase (Member) commented:

Have you tested with the CI version?


zye1996 commented Apr 17, 2020

> Have you tested with the CI version?

I tried all the versions under Releases built for macOS, but none of them works for me. Is there another version available, or do I have to build from source?

sunnycase (Member) commented:

CI from master
https://dev.azure.com/sunnycase/nncase/_build/results?buildId=156&view=artifacts&type=publishedArtifacts

zye1996 commented Apr 17, 2020

> CI from master
> https://dev.azure.com/sunnycase/nncase/_build/results?buildId=156&view=artifacts&type=publishedArtifacts

I tested with the CI version and the compiled model works fine. However, the compilation emitted these warnings:

```
WARN: Conv2D_3 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_5 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_13 Fallback to float conv2d due to weights divergence.
WARN: Conv2D_21 Fallback to float conv2d due to weights divergence.
```

I assume falling back to float conv2d will degrade KPU performance. Is there any solution to this?
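
If it helps, here is how I tried to gauge which layers might be affected. This is a rough guess at what "weights divergence" could mean (I am assuming it relates to the spread of per-output-channel weight ranges and an OHWI kernel layout; the actual metric in nncase may differ):

```python
import numpy as np

def channel_range_ratio(weights: np.ndarray) -> float:
    """Rough divergence proxy: ratio between the widest and the
    narrowest per-output-channel weight range (OHWI layout assumed)."""
    per_channel = weights.reshape(weights.shape[0], -1)
    ranges = per_channel.max(axis=1) - per_channel.min(axis=1)
    return float(ranges.max() / max(ranges.min(), 1e-9))

# Hypothetical usage on a conv kernel pulled out of the tflite model;
# random stand-in weights here, since extraction depends on your tooling.
w = np.random.randn(16, 3, 3, 8).astype(np.float32)
print(channel_range_ratio(w))
```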


zye1996 commented Apr 17, 2020

Figured it out with the option --weights-quantize-threshold.
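
In case it helps others, the full command would look something like this (the threshold value is illustrative; I don't know the default, so tune it for your model):

```
ncc compile model.tflite model.kmodel -i tflite -o kmodel -t k210 \
    --inference-type uint8 --dataset images \
    --input-mean 0.5 --input-std 0.5 \
    --weights-quantize-threshold 64
```

My understanding is that raising the threshold lets layers with a wider weight spread stay quantized instead of falling back to float conv2d.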
