
Loss does not converge when adding an L2 norm layer after the Encoding layer #43

Closed
tengteng95 opened this issue Apr 25, 2018 · 2 comments

@tengteng95
Hi, I am very interested in your great work. I tried to use the Encoding Layer on my own dataset, but I found that the training loss just does not decrease (or decreases very slowly) when an L2 norm layer is used after the Encoding Layer.

Could you please tell me what role the L2 norm plays in your experiments and whether it influences the performance?

@zhanghang1989
Owner

It depends on the scale of your experiments. For the CIFAR experiment with 10 classes, the L2 norm works best. For large-scale problems such as ImageNet, please try using BatchNorm.
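
For reference, a minimal sketch of the two options in PyTorch. It assumes the PyTorch-Encoding package's `encoding.nn.Encoding(D, K)` layer; the `EncHead` module, its parameters, and the input reshaping are illustrative, not code from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import encoding  # PyTorch-Encoding; Encoding(D, K) signature assumed


class EncHead(nn.Module):
    """Hypothetical classification head showing the two normalization choices:
    L2 norm (small-scale, e.g. CIFAR) vs. BatchNorm (large-scale, e.g. ImageNet)."""

    def __init__(self, in_channels=128, num_codes=32, num_classes=10, use_bn=False):
        super().__init__()
        # Encoding layer aggregates B x N x D features into B x K x D codewords
        self.enc = encoding.nn.Encoding(D=in_channels, K=num_codes)
        self.use_bn = use_bn
        # BatchNorm applied to the flattened K*D encoding vector (assumed placement)
        self.bn = nn.BatchNorm1d(num_codes * in_channels) if use_bn else None
        self.fc = nn.Linear(num_codes * in_channels, num_classes)

    def forward(self, x):
        # x: B x D x H x W feature map, reshaped to B x (H*W) x D for the Encoding layer
        b, d = x.size(0), x.size(1)
        x = x.view(b, d, -1).transpose(1, 2).contiguous()
        e = self.enc(x).view(b, -1)          # flatten the K x D codewords
        if self.use_bn:
            e = self.bn(e)                   # BatchNorm variant (large-scale)
        else:
            e = F.normalize(e, p=2, dim=1)   # L2-norm variant (small-scale)
        return self.fc(e)
```

Whether the L2 norm or BatchNorm variant trains better will depend on dataset size and batch size, as discussed above.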

@tengteng95
Author

Thank you very much. BN does alleviate the problem a lot.
