
Fixed-point model numbers #52

Closed · laski007 opened this issue Feb 28, 2018 · 4 comments

Dear Prof. Wang,
For the quantized parameters, you used an (n, m) pair to denote the precision. For example, in the 1st layer of VGG-16 you used (8, 7), (8, 0), and (8, -2) to denote frac_w, frac_input, and frac_output, and for the last FC layer you used (8, 2), (8, 2), and (4, 7). Are there any rules or constraints on how to decide these numbers, or can I use any values I like? If I change the fraction numbers and convert a new model, will it still work? Thank you so much.

laski007 commented Feb 28, 2018

Okay, I have found:
rule #1: the previous layer's output fraction number must equal the next layer's input fraction number.
rule #2: the 1st layer's input fraction number is 0.
Anything else? Why is the 1st layer's output fraction (8, -4) in AlexNet but (8, -2) in VGG-16? For the weight fraction, it is sometimes (8, 7), sometimes (8, 8) or (4, 7), or other numbers such as (8, 9), (8, 10), or (8, 11); why?
I have trained my own model with different numbers of convolution layers and FC layers, but I don't know how to set the weight fraction numbers to convert it.
Many thanks.
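
To make rule #1 and rule #2 concrete, here is a sketch of a sanity check over a per-layer (frac_input, frac_w, frac_output) table (plain C; the layer values are hypothetical, not taken from any real model):

```c
#include <stdio.h>

/* Hypothetical per-layer fraction settings, for illustration only. */
typedef struct { int frac_input, frac_w, frac_output; } LayerFrac;

int main(void) {
    LayerFrac layers[] = {
        { 0, 7, -2},   /* conv1: rule #2 says frac_input = 0       */
        {-2, 7, -1},   /* conv2: frac_input matches conv1's output */
        {-1, 8,  2},
    };
    int n = sizeof(layers) / sizeof(layers[0]);
    if (layers[0].frac_input != 0)
        printf("rule #2 violated: first layer's input fraction != 0\n");
    for (int i = 0; i + 1 < n; i++)
        if (layers[i].frac_output != layers[i + 1].frac_input)
            printf("rule #1 violated between layers %d and %d\n", i, i + 1);
    return 0;
}
```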

laski007 commented Mar 7, 2018

Dear Prof. Wang,
Could you please briefly describe how to decide the frac_w, frac_input, and frac_output numbers when you have spare time? I'm still waiting... Thank you so much; I would be deeply grateful.

aazz44ss commented Mar 8, 2018

frac_w, frac_input, and frac_output depend on the distribution of your data.
If your weights range from -32.0 to 32.0, e.g. -19.83, 2.15, -5.14, 6.81, 21.02, 32.11, ...,
you should choose a frac_w of 2 (1 sign bit, 5 integer bits, 2 fractional bits),
because you need 5 integer bits to hold that range of weights (2^5 = 32).
Every number larger than 32 will be saturated to 32 (see the conv.cl code).
If your weights range only from -1 to 1, you don't need so many integer bits,
so you can increase the fractional bits to increase precision.

frac_input and frac_output are chosen the same way.
You have to try different fractional bits for each layer and check the accuracy on your test data set.

The description of how to choose the fractional bits is in the latest PipeCNN paper, in the document folder.
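
A minimal sketch of that selection heuristic (my own illustration in plain C, not PipeCNN's actual tooling; choose_frac is a made-up name): count the integer bits needed for the largest absolute value in the data, and give the remaining bits to the fraction. A negative integer-bit count is how fractions larger than n - 1, such as (8, 9) or (8, 11), arise.

```c
#include <math.h>
#include <stdio.h>

/* Pick the fractional-bit count m for an n-bit signed fixed-point
 * format, given the largest absolute value seen in the data. */
int choose_frac(float max_abs, int n) {
    /* Integer bits needed to cover [-max_abs, max_abs]. May be
     * negative when all values are well below 1.0. */
    int int_bits = (int)ceilf(log2f(max_abs));
    return n - 1 - int_bits;   /* 1 bit goes to the sign */
}

int main(void) {
    printf("weights up to |32.0|, 8 bits -> frac_w = %d\n",
           choose_frac(32.0f, 8));   /* 2, as in the example above   */
    printf("weights up to |1.0|,  8 bits -> frac_w = %d\n",
           choose_frac(1.0f, 8));    /* 7                            */
    printf("weights up to |0.01|, 8 bits -> frac_w = %d\n",
           choose_frac(0.01f, 8));   /* 13: more frac bits than n-1  */
    return 0;
}
```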

doonny (Owner) commented Mar 12, 2018

Hi all, please refer to MATLAB's fixed-point number definition in the Fixed-Point Toolbox. We are using a very similar data format.
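
For reference, a signed (n, m) format in that style, like MATLAB's fi(x, 1, n, m), covers the range [-2^(n-1-m), 2^(n-1-m) - 2^-m] with resolution 2^-m. A short illustrative C check for the pairs mentioned in this thread (describe is a made-up helper):

```c
#include <math.h>
#include <stdio.h>

/* Print the representable range and resolution of a signed (n, m)
 * fixed-point format, matching MATLAB's fi(x, 1, n, m). */
void describe(int n, int m) {
    double res = pow(2.0, -m);
    double lo  = -pow(2.0, n - 1 - m);
    double hi  =  pow(2.0, n - 1 - m) - res;
    printf("(%d,%2d): range [%g, %g], resolution %g\n", n, m, lo, hi, res);
}

int main(void) {
    describe(8, 7);    /* VGG-16 first-layer weights */
    describe(8, -2);   /* VGG-16 first-layer outputs */
    describe(4, 7);    /* last FC layer outputs      */
    return 0;
}
```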

laski007 closed this as completed Apr 1, 2018