
about tan(s) #9

Open
lcdevelop opened this issue Oct 12, 2016 · 4 comments

Comments

@lcdevelop

Hi, sorry to bother you again.
I read this line in your code: self.state.h = self.state.s * self.state.o
But when I checked the paper, it seems it should be:
self.state.h = np.tanh(self.state.s) * self.state.o
Could you tell me which one is right?

@xylcbd

xylcbd commented Oct 19, 2016

The second one.

@ScottMackay2
Contributor

ScottMackay2 commented Jan 21, 2017

Quote from the paper:
"It is customary that the internal state first be run through a tanh
activation function, as this gives the output of each cell the same dynamic
range as an ordinary tanh hidden unit. However, in other neural network
research, rectified linear units, which have a greater dynamic range, are
easier to train. Thus it seems plausible that the nonlinear function on the
internal state might be omitted."

But with the current example code it seems that adding the tanh gives a better result. Still, both results are quite accurate:
With tanh (100 iterations), loss: 6.31438767294e-07
Without tanh (100 iterations), loss: 2.61076356822e-06

(Note: do not confuse this tanh with the tanh at the input a.k.a. LstmState.g)

tl;dr: Both with and without tanh() are possible.
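
For concreteness, here is a minimal standalone sketch of the two forward variants being discussed (plain numpy, independent of the repository's class layout; the function name cell_output is just for illustration):

import numpy as np

def cell_output(s, o, use_tanh=True):
    # Hidden output h of one LSTM cell, given internal state s and output gate o.
    # use_tanh=True  -> h = tanh(s) * o  (the variant the quoted paper calls customary)
    # use_tanh=False -> h = s * o        (the variant in the current example code)
    return (np.tanh(s) if use_tanh else s) * o

s = np.array([0.5, -2.0, 3.0])
o = np.array([0.9, 0.1, 0.5])
print(cell_output(s, o, use_tanh=True))   # internal state squashed to (-1, 1) before gating
print(cell_output(s, o, use_tanh=False))  # internal state passed through unchanged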

@ScottMackay2
Contributor

ScottMackay2 commented Jan 21, 2017

I think the back-propagation also has to deal with this addition, probably by adding a reverse tanh calculation. But because I don't fully understand the back-propagation yet, I can't pinpoint exactly what to do. Funny that the overall loss is already better without the back-prop fix.

I also tried replacing these lines in the back-propagation (at the beginning of the function top_diff_is):
ds = self.state.o * top_diff_h + top_diff_s
do = self.state.s * top_diff_h

I changed them into (added np.tanh around both s values):
ds = self.state.o * top_diff_h + np.tanh(top_diff_s)
do = np.tanh(self.state.s) * top_diff_h

This resulted in an even better loss of 4.26917706433e-07. But I am skeptical about the correctness here.

Anyway, I am only saying this for people who want to add the tanh for performance improvements. I am not saying it should be added to the code. The code is simpler without the tanh, which makes it easier to understand for learning purposes.
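
For anyone who does want to make the backward pass consistent with h = np.tanh(self.state.s) * self.state.o, a minimal sketch of what the chain rule gives (my own reading, not the repository author's code; the function name top_diffs_with_tanh is just for illustration):

import numpy as np

def top_diffs_with_tanh(s, o, top_diff_h, top_diff_s):
    # Gradients w.r.t. the internal state s and the output gate o
    # when the forward pass uses h = tanh(s) * o.
    #   dh/ds = o * (1 - tanh(s)**2)   (derivative of tanh)
    #   dh/do = tanh(s)
    # top_diff_s carries the gradient arriving from the next time step's cell state.
    ds = o * (1.0 - np.tanh(s) ** 2) * top_diff_h + top_diff_s
    do = np.tanh(s) * top_diff_h
    return ds, do

Note the tanh derivative is taken at self.state.s, not at top_diff_s.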

@ZhangPengB

Hello. I have learned a lot from reading your comment, but I have a question here: if we add the tanh, shouldn't the first line be:
ds = self.state.o * top_diff_h * (1 - np.tanh(top_diff_s) ** 2) + top_diff_s
I think it should. Welcome to discuss.
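
One way to check which backward formula is right, without trusting any hand derivation, is a small numerical gradient check on h = tanh(s) * o (a sketch using plain numpy, independent of the repository code):

import numpy as np

def h_forward(s, o):
    # forward rule under discussion: h = tanh(s) * o
    return np.tanh(s) * o

s, o = 0.7, 0.4
eps = 1e-6

# numerical dh/ds via central differences
num_ds = (h_forward(s + eps, o) - h_forward(s - eps, o)) / (2.0 * eps)

# analytic dh/ds from the chain rule: o * (1 - tanh(s)**2)
ana_ds = o * (1.0 - np.tanh(s) ** 2)

print(num_ds, ana_ds)  # the two should agree to about 1e-9

The analytic term uses tanh of the state s itself, which suggests the factor should be (1 - np.tanh(self.state.s) ** 2) rather than something involving top_diff_s.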
