Why doesn't the batch normalization layer apply scaling and shifting like in the original paper?

thanks,
Marijke

@HotMarijke
tiny-dnn's batch normalization is ported from Caffe's, which doesn't include the scale and bias operation. We already have a scaling and shifting layer called linear_layer, and if we can train its scale and bias, we can emulate the original paper by combining the two.
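For illustration, a minimal sketch of that combination might look like the following. The layer names come from tiny-dnn's public headers; the input dimensions are made up, and note that linear_layer currently applies a fixed scalar scale and bias element-wise rather than the trainable per-channel gamma/beta of the paper, which is exactly the gap described above:

```cpp
#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;
using namespace tiny_dnn::layers;

int main() {
  network<sequential> net;

  // 32x32 RGB input, 5x5 kernels, 16 output feature maps -> 28x28x16
  net << convolutional_layer(32, 32, 5, 3, 16)
      // Caffe-style batch norm: normalizes each of the 16 channels over
      // the 28x28 spatial positions, with no learned scale/bias
      << batch_normalization_layer(28 * 28, 16)
      // element-wise y = scale * x + bias over all 28*28*16 activations;
      // if scale and bias were trained, this would play the role of
      // gamma/beta from the original paper (here they are constants)
      << linear_layer(28 * 28 * 16, 1.0, 0.0)
      << fully_connected_layer(28 * 28 * 16, 10);

  return 0;
}
```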