Difference between 'dropout' and 'backprop' arguments in script #12
The code always builds a Theano graph for both the dropout and non-dropout versions of the network. The choice between dropout and no dropout at training time is made here https://github.com/mdenil/dropout/blob/master/mlp.py#L260 and here https://github.com/mdenil/dropout/blob/master/mlp.py#L314. For an explanation of dropout you should read the arXiv paper linked in the readme.
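Conceptually, the switch at those lines amounts to picking which cost the gradients are taken with respect to. A rough sketch of that idea (illustrative names such as `use_dropout`, `dropout_cost`, and `plain_cost`, not the repo's actual identifiers):

```python
# Sketch of the train-time choice between the two costs that the graph
# always contains. The names here are illustrative, not the repo's code.
def pick_training_cost(dropout_cost, plain_cost, use_dropout):
    # Gradients (and hence weight updates) are taken with respect to
    # whichever cost is returned here; the other pathway still exists
    # in the graph and shares the same weights.
    return dropout_cost if use_dropout else plain_cost
```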
I think I should have phrased the question better. Even in your backprop network, the hidden layers share weights with the dropout layers, as done here https://github.com/mdenil/dropout/blob/master/mlp.py#L130. Is there some motive behind this? Normally, in plain backprop, the hidden layers would initialize their own weights automatically rather than having W and b passed in manually from the dropout layer, right?
The dropout and non-dropout layers share weights so that the same network can be evaluated both with and without dropout. The computational graph splits into two pathways over the same shared parameters: one in which dropout is applied and one in which it is not.
This means that (when training with dropout) updates computed through the dropout pathway also change the weights that the non-dropout pathway uses.
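A small NumPy sketch of the shared-weight idea (not the repository's Theano code; the layer, cost, and update rule here are simplified stand-ins):

```python
import numpy as np

# Minimal NumPy sketch of the shared-weight idea; not the repo's Theano code.
# One parameter set (W, b) feeds two pathways: the dropout pathway masks its
# input, the clean pathway does not. A training step taken through the dropout
# pathway updates the same W and b that the clean pathway uses.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2)) * 0.1   # shared weights
b = np.zeros(2)                          # shared biases

def clean_forward(x):
    return x @ W + b                     # non-dropout pathway

def dropout_forward(x, p=0.5):
    mask = (rng.random(x.shape) > p) / (1.0 - p)   # inverted dropout mask
    x_masked = x * mask
    return x_masked @ W + b, x_masked

x = rng.standard_normal((8, 3))
y = rng.standard_normal((8, 2))

# One SGD step on the dropout pathway's mean squared error.
pred, x_masked = dropout_forward(x)
grad_out = 2.0 * (pred - y) / len(x)
W -= 0.01 * x_masked.T @ grad_out
b -= 0.01 * grad_out.sum(axis=0)

# Because the parameters are shared, the clean pathway (used for test error)
# now reflects the update as well.
print(clean_forward(x))
```

This is the same effect that passing W and b from the dropout layer into the plain hidden layer at mlp.py#L130 achieves: both layers wrap the same shared parameters.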
Thanks for the explanation! It's definitely more comprehensible now. What's the reason, though, for also letting someone who is using dropout compute costs from the other pathway?
I use the non-dropout pathway to compute test error.
I'm not clear on the difference between the dropout and backprop options. Even in the backprop version you have used dropout layers, so what exactly is the difference between the two, and which is better?