
About negative gamma #47

Closed

SeokjuLee opened this issue Jun 5, 2020 · 10 comments

Comments

@SeokjuLee

Hi, thanks for your great efforts on this project.
I have a question about the "gamma" parameter.
Is it natural for gamma to be trained to a negative value?
Does this result mean that attention has a negative effect?

@harrygcoppock

My model often learns a negative gamma. I do not think this means that attention has a negative effect; the most important part is the magnitude, since out = input + gamma*attentionOutput.
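For reference, a minimal sketch of that gamma-weighted residual as it typically appears in SAGAN-style self-attention layers (assuming a PyTorch implementation; the class and argument names here are illustrative, not this repo's exact code):

```python
import torch
import torch.nn as nn

class GammaResidualAttention(nn.Module):
    """Sketch of a gamma-weighted residual wrapped around an attention block."""

    def __init__(self, attention: nn.Module):
        super().__init__()
        self.attention = attention
        # gamma starts at 0, so the layer initially acts as an identity mapping
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        attn_out = self.attention(x)
        # A negative gamma only flips the sign of the attention contribution;
        # its influence on the output is governed by |gamma| * |attn_out|.
        return x + self.gamma * attn_out
```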

@SeokjuLee
Author

@harrygcoppock Thanks Harry.

@csyhping

@harrygcoppock @SeokjuLee Hi, I also got a negative gamma. May I ask what value you got?

@harrygcoppock

Sorry, I never saved the gamma results and cannot remember. My guess is that it slowly increased to 1 or -1, but again that is just a guess. I think it plateaued at this magnitude.

@csyhping

@harrygcoppock thanks for your quick reply. I got a gamma of about -0.01; do you think this is normal?

@harrygcoppock

The point of the gamma parameter is that at the start of training (when gamma = 0) the model can quickly learn the easier convolutional features. Then the model can slowly introduce information from the attention layer. I recall that the paper argues this is a somewhat harder task. If your gamma value remains low, maybe the model struggles to make use of the attention layer? However, this is just speculation. Maybe first investigate the relative magnitudes of the skip connection and the output of the attention layer before gamma is applied (if |output from attention| >> |skip connection|, then a gamma value of 0.01 could still be significant). Finally, did the gamma value plateau at this level?
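One way to run that magnitude check (a rough diagnostic sketch, assuming a layer structured like the earlier snippet; `attn_layer`, `x`, and the function name are placeholders, not part of the repo):

```python
import torch

def inspect_gamma_contribution(attn_layer, x):
    """Compare the skip connection's magnitude against the attention output's.

    Assumes `attn_layer` is a GammaResidualAttention-style module as sketched
    above and `x` is an input batch.
    """
    with torch.no_grad():
        attn_out = attn_layer.attention(x)        # attention output before gamma
        skip_mag = x.abs().mean().item()          # magnitude of the skip connection
        attn_mag = attn_out.abs().mean().item()   # magnitude of the attention output
        gamma = attn_layer.gamma.item()
        print(f"|skip|={skip_mag:.4f}  |attn|={attn_mag:.4f}  "
              f"|gamma*attn|={abs(gamma) * attn_mag:.4f}")
```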

@csyhping

@harrygcoppock, yes, the value plateaued at this level. I'm a rookie at self-attention, so I have no sense of whether -0.01 is a normal value.

@harrygcoppock

I too am relatively new to the field; however, if you are interested in finding out why, all I can think of is my point above. That, and postulating that maybe self-attention is not helping you here: maybe the problem set is not a good match, or you have initialised your attention layer poorly.

@csyhping

@harrygcoppock thanks a lot for your kind help! I will do more checks. Besides, may I ask what a normal value of gamma should be when self-attention works well? 0.1, or anything?

@SeokjuLee
Author

@csyhping Hi, I can't remember the exact value, but I think I got a similar result. As Harry commented, the magnitude of gamma gradually increased from zero.
