
Difference between 'dropout' and 'backprop' arguments in script #12

Closed
saatvikshah opened this issue Feb 17, 2015 · 5 comments

@saatvikshah

I'm not clear on the difference between dropout and backprop. Even in backprop you have used dropout layers? What exactly is the difference between the two, and which is better?

@mdenil
Owner

mdenil commented Feb 17, 2015

The code always builds a Theano graph for both the dropout and non-dropout versions of the network. The choice between dropout and no dropout at training time is made here https://github.com/mdenil/dropout/blob/master/mlp.py#L260 and here https://github.com/mdenil/dropout/blob/master/mlp.py#L314
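In rough terms the selection looks like the sketch below (not the repo's exact code; cost, dropout_cost, and params stand in for the symbolic variables that mlp.py builds):

```python
import theano.tensor as T

def training_gradients(cost, dropout_cost, params, use_dropout):
    # Differentiate the dropout pathway when training with dropout,
    # otherwise differentiate the plain (non-dropout) pathway.
    chosen_cost = dropout_cost if use_dropout else cost
    return [T.grad(chosen_cost, param) for param in params]
```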

For an explanation of dropout you should read the arxiv paper linked in the readme.

@mdenil mdenil closed this as completed Feb 17, 2015
@saatvikshah
Author

I think I should have phrased the question better. Even in your backprop network you share weights with the dropout layers, as shown here https://github.com/mdenil/dropout/blob/master/mlp.py#L130. Is there some motive behind this? Normally the backprop hidden layers would initialize their own weights automatically, without W and b being passed in manually from the dropout layers, right?

@mdenil
Owner

mdenil commented Feb 17, 2015

The dropout and non-dropout layers share weights so that the same network can be evaluated both with and without dropout. If you compute dropout_cost then you get a forward pass with dropout applied, but if you compute cost then you get a forward pass through the same network with no dropout (and with appropriately scaled weights).

The computational graph looks like this:

  cost/errors    dropout_cost/dropout_errors
      |              |
HiddenLayers   DropoutHiddenLayers <--- these share weights
       \__     _____/
          Input

This means that (when dropout=True) we can differentiate with respect to the right pathway to get gradients (https://github.com/mdenil/dropout/blob/master/mlp.py#L260) but we can compute test error using the left pathway (https://github.com/mdenil/dropout/blob/master/mlp.py#L239). When dropout=False the right pathway isn't used at all, but the code still builds the whole graph anyway.
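To make the weight sharing concrete, here is a minimal single-layer sketch in the same spirit (the sizes, the softmax output in place of the full MLP, and the fixed dropout probability p = 0.5 are all assumptions; this is not mlp.py itself):

```python
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

rng = np.random.RandomState(0)
srng = RandomStreams(seed=0)
p = 0.5  # assumed dropout probability

x = T.matrix('x')
y = T.ivector('y')

# One shared weight matrix and bias, used by BOTH pathways.
W = theano.shared(np.asarray(0.01 * rng.randn(784, 10),
                             dtype=theano.config.floatX), name='W')
b = theano.shared(np.zeros(10, dtype=theano.config.floatX), name='b')

# Right pathway: random binary mask on the input, raw weights.
mask = T.cast(srng.binomial(n=1, p=1 - p, size=x.shape), theano.config.floatX)
p_y_drop = T.nnet.softmax(T.dot(x * mask, W) + b)
dropout_cost = -T.mean(T.log(p_y_drop)[T.arange(y.shape[0]), y])

# Left pathway: no mask, the SAME W scaled by (1 - p).
p_y = T.nnet.softmax(T.dot(x, (1 - p) * W) + b)
cost = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])
errors = T.mean(T.neq(T.argmax(p_y, axis=1), y))

# Training differentiates the right pathway, but the updates land on the
# shared W and b, so the left pathway (used for test error) is trained too.
grads = T.grad(dropout_cost, [W, b])
```

With dropout=True a training function would take gradient steps on dropout_cost while a separate function reports errors from the left pathway; with dropout=False you would differentiate cost instead and the masked pathway would simply go unused.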

@saatvikshah
Author

Thanks for the explanation! It's definitely clearer now. What's the reason, though, for allowing someone using dropout to compute costs from the left pathway as well?

@mdenil
Owner

mdenil commented Feb 17, 2015

I use the left pathway to compute test error.
