
Add explanation of omission of bias vectors from regularization in doc/mlp.txt #189

Open
aschmied opened this issue Jun 7, 2017 · 0 comments

aschmied commented Jun 7, 2017

Working through the Multilayer Perceptron tutorial (http://deeplearning.net/tutorial/mlp.html), I noticed that the bias vectors are omitted from the L1 and L2 regularization calculations. However, the linked explanation of regularization (http://deeplearning.net/tutorial/gettingstarted.html#l1-l2-regularization) indicates that the regularization term is computed over the entire parameter vector.

It's easy enough to Google for an explanation of this, but it would be helpful if the omission of the bias vectors were explained directly in the MLP tutorial.
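For context, the convention in question can be sketched in plain NumPy (the tutorial itself uses Theano; the parameter names `W1`, `b1`, `W2`, `b2` and the regularization coefficients below are illustrative, not taken from the tutorial's code). The penalty sums over the weight matrices of each layer while deliberately leaving the bias vectors out:

```python
import numpy as np

# Hypothetical two-layer MLP parameters (shapes and names are
# illustrative only, not the tutorial's actual variables).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 500)), np.zeros(500)
W2, b2 = rng.standard_normal((500, 10)), np.zeros(10)

weights = [W1, W2]  # included in the penalty
biases = [b1, b2]   # deliberately excluded, as in the tutorial

# L1 and squared-L2 penalties computed over the weight matrices only.
L1 = sum(np.abs(W).sum() for W in weights)
L2_sqr = sum((W ** 2).sum() for W in weights)

# Example coefficients; the total cost would add this penalty
# to the negative log-likelihood.
lambda_1, lambda_2 = 0.0, 0.0001
penalty = lambda_1 * L1 + lambda_2 * L2_sqr
```

The usual rationale (which the tutorial could state) is that each bias controls only the offset of a single unit's activation, so penalizing it does not reduce model variance the way shrinking weights does, and regularizing biases can cause underfitting.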

@aschmied aschmied changed the title Add explanation of ommision of bias vectors from regularization in doc/mlp.txt Add explanation of omission of bias vectors from regularization in doc/mlp.txt Jun 7, 2017