Convolution after ReLU in Dense Layer Question #17

Closed
cgarciae opened this issue Jun 27, 2017 · 3 comments

Comments

@cgarciae

cgarciae commented Jun 27, 2017

I've seen that you use:

BN -> ReLU -> Conv3x3 -> Dropout

in the normal case, or

BN -> ReLU -> Conv1x1 -> Dropout -> BN -> ReLU -> Conv3x3 -> Dropout

when using the bottleneck. The question is: why? Most networks use, e.g.,

Conv3x3 -> BN -> ReLU -> Dropout

Why did you invert the order? Did you get better results this way?

Thanks in advance!
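For reference, here is a minimal PyTorch-style sketch of the two orderings above (the class names, channel counts, and dropout rate are illustrative, not taken from this repo):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """Pre-activation dense layer: BN -> ReLU -> Conv3x3 -> Dropout."""
    def __init__(self, in_channels, growth_rate, drop_rate=0.2):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)
        self.drop = nn.Dropout(drop_rate)

    def forward(self, x):
        out = self.drop(self.conv(torch.relu(self.bn(x))))
        # Dense connectivity: concatenate the new features onto the input.
        return torch.cat([x, out], dim=1)

class BottleneckLayer(nn.Module):
    """Bottleneck: BN -> ReLU -> Conv1x1 -> Dropout -> BN -> ReLU -> Conv3x3 -> Dropout."""
    def __init__(self, in_channels, growth_rate, drop_rate=0.2):
        super().__init__()
        inter_channels = 4 * growth_rate  # a common choice for the 1x1 width
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter_channels,
                               kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter_channels)
        self.conv2 = nn.Conv2d(inter_channels, growth_rate,
                               kernel_size=3, padding=1, bias=False)
        self.drop = nn.Dropout(drop_rate)

    def forward(self, x):
        out = self.drop(self.conv1(torch.relu(self.bn1(x))))
        out = self.drop(self.conv2(torch.relu(self.bn2(out))))
        return torch.cat([x, out], dim=1)
```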

@liuzhuang13
Owner

Yes, we found that the current order typically gives higher accuracy. The only difference between the two orders in DenseNet is that the first BN layer has scaling and shifting parameters, which provide later layers with different activation scales. If we use Conv first, the convolutions in different subsequent layers are forced to receive the same activations, which may not be a good thing for training.
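To make this concrete, here is a small hypothetical sketch (PyTorch-style, with made-up layer names and constants, not code from this repo): several later layers read the same concatenated features, and because each layer starts with its own BN, each convolution gets its own rescaled and shifted view of those features; with Conv first, every layer's convolution would consume the identical tensor.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32, 32)  # shared feature maps reused by several later layers

# Each later layer owns its BN, so its scale (weight) and shift (bias) can differ.
bn_a, bn_b = nn.BatchNorm2d(16), nn.BatchNorm2d(16)
nn.init.constant_(bn_a.weight, 0.5)  # pretend layer A learned a small scale
nn.init.constant_(bn_b.weight, 2.0)  # pretend layer B learned a large scale

conv_a = nn.Conv2d(16, 12, kernel_size=3, padding=1, bias=False)
conv_b = nn.Conv2d(16, 12, kernel_size=3, padding=1, bias=False)

# Pre-activation order: each convolution sees its own rescaled view of x.
out_a = conv_a(torch.relu(bn_a(x)))
out_b = conv_b(torch.relu(bn_b(x)))

# A Conv-first order would instead feed identical activations of x to both
# convolutions, removing this per-layer flexibility.
```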

@cgarciae
Author

@liuzhuang13 Thanks for the response! Excellent insight. So if I understand correctly:

  • You perform BN first because, since each layer's BN can learn different parameters, the ReLU activation can be different for each layer, meaning that each layer will see a different version of the same features.
  • Another way of seeing it is that you delegate the BN + activation to the upper layers.

I think you could mention this more in the paper. Reading it more closely, you do reference the Microsoft paper but don't comment on it.

Thanks again!

@liuzhuang13
Owner

If by "upper layers" you mean "deeper layers (layers farther from the input)", then I think we understand it in the same way. Thanks for the suggestion! If there's a newer version of the paper, we'll consider mentioning this more.
