Weight normalization #4
Hi, when we first tested our algorithm without weight normalization, we also ran into that problem: the gradients of the clipping parameter in several layers suddenly explode. We then tried small learning rates for the clipping parameter, but the performance was not good. We believe the cause is that the distribution of the weights shifts significantly during training, and there is no heuristic that tells you when to increase the LR (to accommodate the shift in the weight distribution) or when to decrease it (to stabilize training). That led us to normalize the weights. Weight normalization is inspired by Batch Normalization on activations, because we found that learning the clipping parameter for activation quantization does not have the NaN issue.
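The idea described above can be sketched as follows: standardize the real-valued weights to zero mean and unit variance before feeding them to the quantizer, so the learned clipping parameter always sees a stable distribution. This is a minimal sketch (normalizing over the whole tensor; the function name and `eps` value are my own, not from the repo):

```python
import torch

def weight_normalize(w, eps=1e-7):
    # Standardize the real-valued weight tensor so the quantizer's
    # clipping parameter sees a zero-mean, unit-variance input.
    mu = w.mean()
    std = w.std()
    return (w - mu) / (std + eps)

w = torch.randn(64, 64, 3, 3)
w_hat = weight_normalize(w)
```

The quantizer is then applied to `w_hat` instead of `w`; the normalization is part of the forward pass, so gradients flow through it.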
Thanks for the answer!
I think weight normalization cannot be applied to the last layer, because the output of the last layer is the output of the network, and there is no BN afterwards to standardize its distribution. For the last layer, maybe you can apply the DoReFa scheme to quantize the weights and our APoT quantization to the activations.
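For reference, the DoReFa-Net weight quantization mentioned here maps weights through `tanh` into [0, 1], quantizes uniformly to k bits, and rescales to [-1, 1]. A minimal sketch (function names are mine; the straight-through estimator used for backprop is omitted):

```python
import torch

def quantize_k(x, k):
    # Uniform k-bit quantization of x, assumed to lie in [0, 1].
    n = float(2 ** k - 1)
    return torch.round(x * n) / n

def dorefa_weight_quant(w, k):
    # DoReFa-Net weight quantization: tanh -> rescale to [0, 1],
    # uniform k-bit quantization, then map back to [-1, 1].
    t = torch.tanh(w)
    x = t / (2 * t.abs().max()) + 0.5
    return 2 * quantize_k(x, k) - 1
```

Because it needs no learned clipping parameter, this scheme avoids the NaN issue on the unnormalized last-layer weights.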
Thanks for the great work and the clarification on the weight_norm! I want to ask: after applying weight normalization to the real-valued weights, should the lr for \alpha be the same as for the weights, or should the lr and weight_decay for \alpha be adjusted (like the settings in your commented code at APoT_Quantization/ImageNet/main.py, Line 181 in a818104)?
Hello!
I found that without weight normalization, the network stops learning and the loss becomes NaN. Could you please explain why this happens and how it can be fixed?