Yes, we found that the current order typically gives higher accuracy. The only difference between the two orders in DenseNet is that the first BN layer has scaling and shifting parameters, which provide later layers with different activation scales. If we used CONV first, the convolutions in different subsequent layers would be forced to receive the same activations, which may not be a good thing for training.
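To make this concrete, here is a minimal PyTorch-style sketch of a single dense layer in both orders (an illustration only, not the authors' Torch implementation; the class names are made up). With BN first, each layer's learned scale/shift parameters transform the shared concatenated features before its convolution sees them; with Conv first, every subsequent layer receives exactly the same activations.

```python
import torch
import torch.nn as nn

class DenseLayerBNFirst(nn.Module):
    """DenseNet ordering: BN -> ReLU -> Conv(3x3).

    The layer owns its BN, so its learned scale/shift (gamma/beta)
    transform the shared concatenated input differently from every
    other layer that reads the same features.
    """
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))


class DenseLayerConvFirst(nn.Module):
    """Conventional ordering: Conv(3x3) -> BN -> ReLU.

    Here the convolution is applied directly to the raw concatenated
    features, which are identical for every layer that consumes them.
    """
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(growth_rate)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))


# In a dense block, each new layer reads the concatenation of all
# preceding feature maps, so the ordering above determines whether
# that shared input is re-scaled per layer (BN first) or not.
x = torch.randn(1, 64, 32, 32)
layer = DenseLayerBNFirst(in_channels=64, growth_rate=32)
y = torch.cat([x, layer(x)], dim=1)  # 64 + 32 = 96 channels
```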
@liuzhuang13 Thanks for the response! Excellent insight, so if I understand correctly:
You perform BN first because, given that each layer's BN may learn different parameters, the ReLU activations may differ for each layer, meaning that each layer will see a different version of the same features.
Another way of seeing it is that you delegate the BN + activation to the upper layers.
I think you could mention this more in the paper. Reading it more closely, you do reference the Microsoft paper but don't comment on it.
If by "upper layers" you mean "deeper layers (layers farther from the input)", then I think we understand it in the same way. Thanks for the suggestion! If there's a newer version of the paper, we'll consider mentioning this more.
I've seen that you use BN-ReLU-Conv(3x3) in the normal case, or BN-ReLU-Conv(1x1)-BN-ReLU-Conv(3x3) when using bottleneck. The question is why? Most networks use e.g. Conv-BN-ReLU.
Why did you invert the order? Did you get better results this way?
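For reference, a rough PyTorch-style sketch of the bottleneck composite function mentioned above (BN -> ReLU -> Conv 1x1 -> BN -> ReLU -> Conv 3x3, with the 1x1 conv reducing to 4 * growth_rate channels as described in the DenseNet paper). This is an illustrative rendering, not the original snippet from the repo:

```python
import torch
import torch.nn as nn

class BottleneckLayer(nn.Module):
    """Bottleneck composite function:
    BN -> ReLU -> Conv(1x1) -> BN -> ReLU -> Conv(3x3).

    The 1x1 convolution first compresses the concatenated input to
    4 * growth_rate channels before the 3x3 convolution.
    """
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        inter_channels = 4 * growth_rate
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter_channels,
                               kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter_channels)
        self.conv2 = nn.Conv2d(inter_channels, growth_rate,
                               kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out
```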
Thanks in advance!