round_to_fixed unsigned tensors #18
Comments
Hi @mengjingyouling You are right, the code does deal with signed tensors. What I meant in the comment is that handling unsigned tensors is a TODO. This is because input activations to a convolution are usually the output of a previous ReLU, and hence are either 0 or positive. Please don't hesitate to follow up or ask any more questions.
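For concreteness, here is a minimal sketch of what an unsigned variant could look like. This is hypothetical (`round_to_fixed_unsigned` is not a function in the DeepShift repo); it simply drops the sign bit so the full range `[0, 2**integer_bits - 1]` becomes representable:

```python
import math
import torch

# Hypothetical sketch, not part of the DeepShift repo: fixed-point
# rounding for unsigned tensors such as post-ReLU activations.
# With no sign bit reserved, the representable range grows to
# [0, 2**integer_bits - 1].
def round_to_fixed_unsigned(input, integer_bits=16, fraction_bits=16):
    delta = math.pow(2.0, -fraction_bits)
    max_val = math.pow(2.0, integer_bits) - 1
    rounded = torch.floor(input / delta) * delta
    return torch.clamp(rounded, 0.0, max_val)
```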
Thanks @mostafaelhoushi. If we want to use an FPGA to demonstrate the power consumption and speed advantages of your DeepShift method, how would you suggest we proceed?
I am personally not experienced with FPGAs, but there are several papers that implement shift convolution on FPGAs; I suggest looking at them:
thanks |
@mengjingyouling : I just noticed this new paper that describes an FPGA implementation based on DeepShift: https://ieeexplore.ieee.org/document/10005141
Hi @mostafaelhoushi,
The input x is converted to 32-bit fixed point in your paper, as follows:
```python
import math
import torch

def round_to_fixed(input, integer_bits=16, fraction_bits=16):
    assert integer_bits >= 1, integer_bits
    # TODO: Deal with unsigned tensors where there is no sign bit,
    # which is the case with activations to convolution that
    # are usually the output of a Relu layer
    if integer_bits == 1:
        return torch.sign(input) - 1
    delta = math.pow(2.0, -fraction_bits)
    bound = math.pow(2.0, integer_bits - 1)
    min_val = -bound
    max_val = bound - 1
    # Quantize down to the nearest multiple of delta, then clamp
    # to the representable signed range.
    rounded = torch.floor(input / delta) * delta
    return torch.clamp(rounded, min_val, max_val)
```
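To illustrate its behavior, a quick usage sketch (assuming the clamp-and-return completion shown above): with integer_bits=8 and fraction_bits=8, values are floored to multiples of 2**-8 and clamped to [-128, 127].

```python
x = torch.tensor([-200.0, -1.7, 0.3, 126.9, 200.0])
print(round_to_fixed(x, integer_bits=8, fraction_bits=8))
# tensor([-128.0000,  -1.7031,   0.2969, 126.8984, 127.0000])
```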
The comment in this function says it is about unsigned tensors, but we think it is about signed tensors. For example, signed int8 covers [-128, 127], and round_to_fixed(-128, integer_bits=8, fraction_bits=8) = -128, as traced below. Are we right? Thank you.
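Tracing the intermediate values step by step (a sketch assuming the clamp-and-return completion above) supports this reading:

```python
import math
import torch

delta = math.pow(2.0, -8)              # 0.00390625
bound = math.pow(2.0, 8 - 1)           # 128.0
min_val, max_val = -bound, bound - 1   # signed range [-128, 127]

x = torch.tensor([-128.0])
rounded = torch.floor(x / delta) * delta       # -128.0 (already a multiple of delta)
print(torch.clamp(rounded, min_val, max_val))  # tensor([-128.])
```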