In my opinion, the scaling factor can be kept separate from the binary weights when doing the convolution: binary operations for the bulk of the work, and floating-point operations only for the scaling factors at the end.
Yeah, that's true. Scaling factors can be applied after the efficient binary convolution is done.
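To make that concrete, here is a minimal PyTorch sketch (the shapes and the per-channel `alpha` are made up for illustration): by linearity of convolution, folding a per-output-channel factor into {-1, +1} weights gives the same result as scaling the output of a pure binary convolution.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes for illustration.
x = torch.randn(1, 16, 8, 8)
binary_w = torch.sign(torch.randn(32, 16, 3, 3))  # values in {-1, +1}
alpha = torch.rand(32, 1, 1, 1)                   # per-output-channel scaling factors

# Folding the factor into the weights (what you see when printing them) ...
y_folded = F.conv2d(x, alpha * binary_w, padding=1)

# ... equals scaling after a pure {-1, +1} convolution, so the inner
# convolution can still be done with efficient binary ops (XNOR + popcount).
y_post = alpha.view(1, 32, 1, 1) * F.conv2d(x, binary_w, padding=1)

assert torch.allclose(y_folded, y_post, atol=1e-5)
```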
Hi, author. Thanks for the nice work!
I have a question for the author. The HardBinaryConv code represents a binary convolution, but when I print the weights I see values like 0.005, 0.006, and so on. The code multiplies torch.sign(real_weights) by a scaling_factor, which seems to turn the binarized weights back into 32-bit floats. I'm not sure whether my understanding is correct.
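For readers following along, the pattern the question describes looks roughly like the sketch below. This is a hedged reconstruction, not the repository's verbatim code (the straight-through-estimator gradient details are omitted): the printed weights are small floats like 0.005 because the per-channel scaling_factor is folded into torch.sign(real_weights), but up to that one scalar per output channel the weights are still binary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardBinaryConvSketch(nn.Module):
    """Simplified sketch of a scaled binary conv layer (not the repo's exact code)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.stride, self.padding = stride, padding
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.001
        )

    def forward(self, x):
        real_weights = self.weight
        # Per-output-channel scale: mean absolute value of the real-valued weights.
        scaling_factor = real_weights.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Binarize, then re-attach the scale; the folded weights therefore print
        # as small floats (e.g. +/-0.005) rather than bare +/-1.
        binary_weights = scaling_factor * torch.sign(real_weights)
        return F.conv2d(x, binary_weights, stride=self.stride, padding=self.padding)
```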