'clipping_threshold' LSTM parameter same as 'clip_gradient' Caffe parameter? #18
Comments
I think they are different. As far as I know, the Caffe main code scales the whole gradient based on its L2 norm.
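That norm-based scaling can be sketched in NumPy. This is a paraphrase of the logic behind Caffe mainline's `clip_gradients` solver parameter, not the actual C++ code; the function name is mine:

```python
import numpy as np

def clip_by_global_l2_norm(grads, clip_gradients):
    """Rescale every gradient blob by the same factor when the global
    L2 norm across all gradients exceeds clip_gradients (Caffe-mainline style)."""
    l2norm_diff = np.sqrt(sum(np.sum(g * g) for g in grads))
    if l2norm_diff > clip_gradients:
        scale_factor = clip_gradients / l2norm_diff
        grads = [g * scale_factor for g in grads]
    return grads

# A gradient of L2 norm 5 clipped at 1.0 is scaled by 0.2;
# its direction is preserved, only its length shrinks.
clipped = clip_by_global_l2_norm([np.array([3.0, 4.0])], 1.0)
```

Note that every element is scaled by the same factor, so the update direction is unchanged.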
@junhyukoh I have yet to find the right 'clip_gradient' value with Caffe mainline. I have tried values between 1 and 10, but they do not reproduce the signal as faithfully as your 'clipping_threshold' value of 0.1. Any guidance would be very valuable. I'm using the simple single-stack LSTM signal-following example. Thank you. P.S. Do you think gradient bounding should be an option in addition to gradient scaling?
Are you initializing the bias of the forget gate to some large positive value (say, 5)?
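For context on why a large positive forget-gate bias matters (a standard LSTM training trick, not specific to this repo): the forget gate starts nearly fully open, so cell-state gradients survive many timesteps early in training. A quick check of the gate activation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Zero bias: the forget gate starts half-open, so the cell state
# (and its gradient) is multiplied by ~0.5 at every timestep.
half_open = sigmoid(0.0)

# Bias of 5: the gate starts ~99.3% open, so gradients can flow
# across long sequences from the very first training iterations.
nearly_open = sigmoid(5.0)
```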
Thank you for looking into this. Yes, that was very clear from the Clockwork RNN paper (and your code). I've attached my entire Python script; kindly take a look.
The prototxt file below inputs the data in the 'proper' Caffe way.
Did you solve this? What do I use in place of clip_gradients: 0.1? Are you able to share the code you used? Also, could you share your test solver for the toy example?
Probably not to my satisfaction (it's a bit dated and I don't remember). I now use Keras/TF mostly, and you can find lots of links on clipping there. All the best.
I see. I have implemented the test model as well, following your example. The training loss looks to be decreasing too. However, when I run it, the prediction always gives the same output, and I am not sure why. If you have any ideas why that might occur, I would like to know. In the lstm_deploy model, I merely changed the shape from 320 to 2.
@junhyukoh
I'm porting your simple LSTM example to the Caffe mainline tree. As expected, some keywords and parameters are different, since the implementations were developed independently.
My question is about the `clipping_threshold` parameter. In your LSTM implementation, I see (in the backward LSTM computation):
I don't see this in the Caffe mainline code. There, `clip_gradients` is converted into a scale factor:

```cpp
Dtype scale_factor = clip_gradients / l2norm_diff;
```
Is it the same parameter? Does it have the same effect? Is one a scaled version of the other?
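To make the question concrete, the two schemes can be contrasted in a small sketch. The function names are mine, and I'm assuming `clipping_threshold` bounds each gradient element individually, which the repo's backward code would need to confirm:

```python
import numpy as np

def bound_elementwise(grad, threshold):
    """Clip each element into [-threshold, threshold]
    (my reading of caffe-lstm's clipping_threshold)."""
    return np.clip(grad, -threshold, threshold)

def scale_by_l2_norm(grad, clip_gradients):
    """Rescale the whole gradient when its L2 norm exceeds
    clip_gradients (Caffe mainline's clip_gradients behaviour)."""
    l2norm_diff = np.linalg.norm(grad)
    if l2norm_diff > clip_gradients:
        return grad * (clip_gradients / l2norm_diff)
    return grad

g = np.array([3.0, 0.1])
# Bounding flattens only the large component, changing the gradient's
# direction; norm scaling shrinks both components, preserving it.
bounded = bound_elementwise(g, 1.0)
scaled = scale_by_l2_norm(g, 1.0)
```

So neither is simply a scaled version of the other: element-wise bounding can change the update direction, while norm-based scaling never does.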
Could you help with your insight?
Thank you
Auro