From my own experience I only encountered this problem once - the workaround I had to use to avoid it was to train from scratch (and not use the ImageNet weights).
In the docs for nncase you can find the following converter parameters: https://github.com/kendryte/nncase/blob/master/docs/USAGE_EN.md
--dump-weights-range
which shows the weight range - on some layers you'll see quite a difference; that is the large divergence the converter is complaining about.
--weights-quantize-threshold
the threshold that controls whether an op is quantized or not according to its weights range; the default is 32.000000. You can increase it to accommodate the weight divergence in your model. However, when I tried it, it resulted in a bad model for me.
Both of these options are available in nncase2beta4. You'll need to perform the conversion manually.
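As a rough sketch, a manual conversion with nncase's `ncc` CLI using those two flags might look like this (the model filenames, target, and calibration dataset path are placeholders; check `ncc compile --help` for the exact syntax of your nncase version):

```shell
# Inspect per-layer weight ranges during compilation
# (flag name from the nncase usage docs linked above)
ncc compile model.tflite model.kmodel -i tflite -t k210 \
    --dataset ./calibration_images \
    --dump-weights-range

# Raise the quantization threshold (default 32.0) so layers with a
# larger weights range are still quantized instead of rejected
ncc compile model.tflite model.kmodel -i tflite -t k210 \
    --dataset ./calibration_images \
    --weights-quantize-threshold 64.0
```

Run the first command to find which layers have outlier weight ranges, then pick a threshold for the second command just above the largest range you see.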
What can I do to fix this?