
Conv-ReLU-BN issue #39

Open
mileyan opened this issue Oct 28, 2019 · 1 comment

Comments

mileyan commented Oct 28, 2019

I just found that this code uses conv-relu-bn; however, it should be conv-bn-relu. Could you please fix it?

@semin-park

https://www.reddit.com/r/MachineLearning/comments/67gonq/d_batch_normalization_before_or_after_relu/

I'm also curious about this. Could someone try both orderings and report the results?
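For anyone who wants to run that comparison, the two variants differ only in module order. A minimal PyTorch sketch (the helper names and the padding choice are illustrative, not from this repo):

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    # Ordering described in the MAML paper: Conv -> BN -> ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

def conv_relu_bn(in_ch, out_ch):
    # Ordering currently in this repo: Conv -> ReLU -> BN
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
    )

x = torch.randn(8, 3, 28, 28)
print(conv_bn_relu(3, 64)(x).shape)  # torch.Size([8, 64, 28, 28])
print(conv_relu_bn(3, 64)(x).shape)  # torch.Size([8, 64, 28, 28])
```

Swapping one helper for the other inside the model is enough to A/B-test the two orderings on the same task.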

yan12125 pushed a commit to yan12125/MAML-Pytorch that referenced this issue Jul 9, 2021
See dragen1860#39

In the paper [1], BN comes before ReLU:

> Our model follows the same architecture as the embedding function
> used by Vinyals et al. (2016), which has 4 modules with 3 × 3
> convolutions and 64 filters, followed by batch normalization
> (Ioffe & Szegedy, 2015), a ReLU non-linearity, and 2 × 2 max-pooling.

I don't understand where the MaxPool layer should be, so I skip it for
now.

[1] https://arxiv.org/pdf/1703.03400.pdf
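Reading the quote literally, each module is conv → BN → ReLU → max-pool. A minimal sketch of the full 4-module embedding under that interpretation (the 28×28 single-channel input is an assumption, e.g. Omniglot; padding is illustrative):

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch=64):
    # Per the paper quote: 3x3 conv with 64 filters, then batch norm,
    # then a ReLU non-linearity, then 2x2 max-pooling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

# Four modules; only the first changes the channel count.
embedding = nn.Sequential(block(1), block(64), block(64), block(64))

x = torch.randn(4, 1, 28, 28)  # e.g. a batch of Omniglot images
print(embedding(x).shape)      # torch.Size([4, 64, 1, 1])
```

Each 2×2 pool halves the spatial size (28 → 14 → 7 → 3 → 1), so the embedding ends up as a 64-dimensional feature per image.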