
Fallback to float conv2d due to weight divergence #164

Closed
machcchan474 opened this issue Aug 19, 2020 · 2 comments

Comments

@machcchan474

What can I do to fix this?

@AIWintermuteAI

From my own experience, I encountered this problem only once; the workaround I used to avoid it was to train from scratch (and not use the ImageNet weights).
In the docs for nncase you can find the following converter parameters:
https://github.com/kendryte/nncase/blob/master/docs/USAGE_EN.md

--dump-weights-range
Shows the weight range of each layer. On some layers you'll see quite a difference; that is the large divergence the converter is complaining about.

--weights-quantize-threshold
The threshold that controls whether an op is quantized, based on its weight range; the default is 32.000000. You can increase it to accommodate the weight divergence in your model. However, when I tried it, it resulted in a bad model.

Both of these options are available in nncase 0.2.0-beta4. You'll need to perform the conversion manually (see the sketch below).

@sunnycase
Member

The latest nncase has removed this feature.
