
If I don't want to use bias on Conv2D, what can I do? #80

Open
passingduck opened this issue Aug 20, 2020 · 8 comments

Comments

@passingduck

I have spent a lot of time trying to use Conv2D without bias (taking the weights from a checkpoint trained with TensorFlow 1.15).
(Very dirty code... sorry.)
[screenshots of the modified code]

First, I changed nnom_utils.py: I erased all the c_b lines, changed the call to layer.set_weights([c_w]), and changed the related length of the convolution layer.

Second, I added a zero bias instead, but the np.log2 function raised an error, so I added an if statement like 'if min_value == 0: min_value = 1e-36'.

Both ways lead to the same problem.
The first call to the evaluate_model function works fine:
[screenshot: evaluation before generate_model]
But if I run the generate_model function and then evaluate again, the evaluate_model function no longer works correctly.
[screenshot: evaluation after generate_model]
And here is a picture of the results on the board:
[screenshot: results on the board]
Thank you for reading this long post, and I apologize for my terrible English and coding skills.
Stay safe.
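
A rough sketch of the second workaround above, since the screenshots don't reproduce here; the stand-in layer, kernel shapes, and guard value are assumptions based on the comment, not the poster's exact nnom_utils.py code:

    import numpy as np
    from tensorflow import keras

    # stand-in conv layer and checkpoint kernel for illustration
    layer = keras.layers.Conv2D(8, 3, use_bias=True)
    layer.build((None, 32, 32, 3))
    c_w = np.random.randn(3, 3, 3, 8).astype('float32')  # kernel from the ckpt

    # pad an all-zero bias so set_weights still gets a [kernel, bias] pair
    c_b = np.zeros(c_w.shape[-1], dtype=c_w.dtype)
    layer.set_weights([c_w, c_b])

    # guard the log2-based range search against the all-zero bias
    min_value = float(np.abs(c_b).max())
    if min_value == 0:
        min_value = 1e-36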

@majianjia
Owner

Hi passingduck,
Thanks for your interest. The short answer is that you have to use a bias in both conv and dense layers.
This is a restriction of the backend, CMSIS-NN.
Is there any reason you don't want to use bias? I might add this feature later if it is necessary.
Thanks,
Jianjia

@passingduck
Author

passingduck commented Aug 20, 2020

Thanks for the quick reply.
I was trying to use an existing trained model. Also, I read that no bias is needed because the beta in BatchNormalization works the same way.
If this is added in the future, I would appreciate it. Have a nice day.
Thanks!

@passingduck
Author

Hi Jianjia,
I used NNoM after adding the bias, but I ran into the same issues.
Can you think of any possible reason for this problem?
Before generate_model:
[screenshot]
After generate_model:
[screenshot]

After calling the generate_model function, the first layer's weights change like this. Could this be the problem?
Before:
[screenshot]
After:
[screenshot]

Thanks, and have a great weekend!
passingduck

@majianjia
Owner

generate_model() will destroy the Keras model, because the quantisation works on the same model instance. So you should either copy the model before passing it to generate_model(), or reload it from file (see the sketch below).
Hope it helps
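
A minimal sketch of that workflow, assuming NNoM's scripts/nnom.py is importable and the trained model was saved to a file; the path 'trained.h5' and the x_test/y_test arrays are placeholders:

    from tensorflow import keras
    from nnom import generate_model, evaluate_model  # scripts/nnom.py

    model = keras.models.load_model('trained.h5')
    evaluate_model(model, x_test, y_test)          # baseline float accuracy

    # generate_model() quantises this instance in place,
    # so treat the saved file as the source of truth
    generate_model(model, x_test, name='weights.h')

    model = keras.models.load_model('trained.h5')  # fresh, unquantised copy
    evaluate_model(model, x_test, y_test)          # matches the baseline again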

@piyush-das

Hi @majianjia, for supporting legacy models, would it be possible to add support for having no bias in the conv layer? Alternatively, if there is a hacky way to implement the same, could you please give some pointers?

Thanks.

@majianjia
Owner

NNoM doesn't support conv layers without bias, because the backends currently must have a bias to work properly.

You must add it somewhere. There are two ways to do it:

  1. Modify the NNoM conv layers: when no bias is available, malloc a chunk of memory the size of the output, set it to 0, and use it as the bias.
  2. In Python code, after loading the (trained) model, add a bias tensor to each layer that doesn't have one, with the correct shape and all values zero; a sketch of this option follows below.

Hope it helps
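
A possible tf.keras implementation of option 2, sketched under the assumption of a Sequential or functional model with a defined input shape; the helper name add_zero_bias is mine, not part of NNoM:

    import numpy as np
    from tensorflow import keras

    def add_zero_bias(model):
        # turn use_bias on in the serialised config, rebuild the model,
        # then copy the old weights over, appending an all-zero bias
        # wherever the rebuilt layer now expects one
        config = model.get_config()
        for layer_cfg in config['layers']:
            cfg = layer_cfg.get('config', {})
            if 'use_bias' in cfg:
                cfg['use_bias'] = True
        new_model = type(model).from_config(config)
        for old, new in zip(model.layers, new_model.layers):
            weights = old.get_weights()
            if len(new.get_weights()) == len(weights) + 1:
                bias_shape = new.get_weights()[-1].shape
                weights.append(np.zeros(bias_shape, dtype=weights[0].dtype))
            new.set_weights(weights)
        return new_model

The rebuilt model can then be passed to generate_model() as usual.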

@piyush-das

piyush-das commented Dec 28, 2021

Thanks for the pointers.
I tried option 2. However, it resulted in

File "/home/xyz/nnom/scripts/nnom.py", line 216, in find_dec_bits_max_min
  int_bits = int(np.ceil(np.log2(max(max_val, min_val))))
OverflowError: cannot convert float infinity to integer
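
For reference, the overflow is easy to reproduce in isolation: with an all-zero bias tensor, max_val and min_val are both 0, log2(0) is negative infinity, and converting infinity to int raises exactly this error:

    import numpy as np

    max_val = min_val = 0.0                # an all-zero bias tensor
    bits = np.log2(max(max_val, min_val))  # -inf, plus a RuntimeWarning
    int(np.ceil(bits))                     # OverflowError: cannot convert
                                           # float infinity to integer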

Then, just to clean up the flow, I also introduced a bias_initializer of "random_normal" so that max_val and min_val don't reduce to 0 for the bias tensors. However, I then got the following error:

File "/home/xyz/nnom/scripts/nnom.py", line 731, in quantize_weights
    f.write('#define ' + layer.name.upper() + '_BIAS_LSHIFT '+to_cstyle(bias_shift) +'\n\n')
UnboundLocalError: local variable 'bias_shift' referenced before assignment

Trying to figure out the reason for the above.

@piyush-das

It appears that NNoM dispatches on the names of the layers instead of their instance types. If the layers follow a different naming convention, the bias_shift variable never gets assigned.
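
A runnable illustration of that failure mode (this is not the actual nnom.py source; the function and shift value are stand-ins): when the shift is only assigned inside a name-matching branch, any layer named outside the convention reaches the write with bias_shift unbound:

    def quantize_weights_sketch(layer_name):
        # illustrative only -- not the actual nnom.py code
        if 'conv2d' in layer_name or 'dense' in layer_name:
            bias_shift = 3                      # placeholder shift value
        # a layer named outside the convention skips the branch, so:
        return '_BIAS_LSHIFT %d' % bias_shift   # UnboundLocalError here

    quantize_weights_sketch('features_0')       # raises UnboundLocalError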

jonnor added a commit to jonnor/embeddedml that referenced this issue Apr 7, 2024
Currently fails to generate a model

    File "../nnom-models/auto_test/venv/lib/python3.11/site-packages/nnom/nnom.py", line 731, in quantize_weights
      f.write('#define ' + layer.name.upper() + '_BIAS_LSHIFT '+to_cstyle(bias_shift) +'\n\n')
                                                                        ^^^^^^^^^^
   UnboundLocalError: cannot access local variable 'bias_shift' where it is not associated with a value

MobileNet does not use bias in its Conv layers. That is currently not supported in NNoM,
and might be the reason for this error.

Ref majianjia/nnom#80