SRU Module Doesn't appear to Use Residual/skip connections #10

Open
NickShahML opened this issue Sep 13, 2017 · 5 comments

@NickShahML

@taolei87 thanks for the repo again. Really good code.

One thing I noticed while analyzing the code is that the SRU class doesn't seem to have skip connections. Shouldn't it be:

        prevx = input
        lstc = []
        for i, rnn in enumerate(self.rnn_lst):
            h, c = rnn(prevx, c0[i])
            prevx = prevx + h  # you currently have prevx = h; adding keeps a residual path
            lstc.append(c)

In this way the connections are residual, which is useful for stacking multiple layers.

@taolei87
Contributor

Hi @NickShahML,

We use highway connections (Eq. 7) instead of identity connections (residual); this is implemented in the CUDA code.

Comparing highway with identity (or a version without any skip connections) is a TODO.
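
For reference, here is a rough sketch of the difference as I read Eq. 7, written in plain PyTorch rather than the actual CUDA kernel; the tensor names and the tanh activation are placeholders/assumptions, not the kernel's exact form:

    import torch

    # Placeholder tensors for illustration only (this is not the CUDA kernel):
    #   x -- the layer input
    #   c -- the SRU internal state after the recurrence
    #   r -- the reset/highway gate, e.g. a sigmoid of a projection of x

    def highway_output(x, c, r):
        # Highway combination: the gate interpolates between the
        # transformed state and the untouched input.
        return r * torch.tanh(c) + (1.0 - r) * x

    def residual_output(x, c):
        # Identity/residual combination: simply add the input back.
        return torch.tanh(c) + x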

I would love to hear feedback from you as well :). Thanks!

@taolei87
Contributor

Similar question: #9

@NickShahML
Author

Gotcha @taolei87. I'll need to modify the CUDA code to do the residual addition as I suggested above. Right now I don't have the time, but I can't imagine it being too difficult. In my experience, residual connections always perform better than highway connections for RNNs and are much cheaper.

@taolei87
Contributor

@NickShahML I tried residual connections briefly on the ICML language modeling task. The training loss decreases much more slowly than with highway connections, so I stopped, given time and resource constraints.

Of course, I might not have done this very carefully or thoroughly. I would love to hear your feedback. Thanks!

@NickShahML
Author

NickShahML commented Sep 15, 2017

@taolei87 Thanks for the update. It's unfortunate that you're getting this result. I looked at your commits and couldn't find where you implemented this change. Do you mind pushing the code so that I can check your implementation?

Basically, each layer's input should be added element-wise to its output before being passed to the next layer.
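
Concretely, here is a minimal self-contained sketch of what I mean, using plain nn.LSTM layers as stand-ins for SRU layers (the class name, sizes, and the LSTM stand-in are just for illustration):

    import torch
    import torch.nn as nn

    class ResidualRNNStack(nn.Module):
        """Each layer's output is added element-wise to its input
        before being fed to the next layer (residual stacking)."""

        def __init__(self, dim, depth):
            super().__init__()
            self.layers = nn.ModuleList(nn.LSTM(dim, dim) for _ in range(depth))

        def forward(self, x):
            prevx = x                      # (seq_len, batch, dim)
            for rnn in self.layers:
                h, _ = rnn(prevx)
                prevx = prevx + h          # add, don't replace
            return prevx

    # e.g. ResidualRNNStack(dim=128, depth=4)(torch.randn(35, 16, 128))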

Another avenue that I think could be extremely powerful is to do self-attention at each layer. It would be best to use multiplicative attention with 8 heads, as they do in this paper:

https://arxiv.org/abs/1706.03762

The idea is this:

output = SRUCell_Zero(input)
output += self_attention(output) / tf.sqrt(num_neurons)  # 8 heads concatenated, then added element-wise to the output
output += SRUCell_One(output)
# Repeat attention and cell depending on how many layers you want.

The idea here is that we can attend to multiple parts of the input in parallel, which is computationally very fast. One thing we would need to specify is whether to mask future inputs from attention. If you're doing a language modeling task, for example, the network can just memorize the future inputs with this attention mechanism. However, if you're doing a classification task, then masking is not needed at all, since the whole sequence is already available.
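
A minimal PyTorch sketch of that block (the GRU stand-in for SRU, the class name, and the shapes are my assumptions, not the repo's actual API), with a toggle for the causal mask:

    import math
    import torch
    import torch.nn as nn

    class AttentiveRNNBlock(nn.Module):
        """One recurrent layer followed by 8-head self-attention whose
        output is scaled and added back element-wise, as sketched above.
        nn.GRU stands in for an SRU layer here."""

        def __init__(self, dim, num_heads=8):
            super().__init__()
            self.rnn = nn.GRU(dim, dim)
            self.attn = nn.MultiheadAttention(dim, num_heads)

        def forward(self, x, mask_future=True):
            # x: (seq_len, batch, dim)
            out, _ = self.rnn(x)
            attn_mask = None
            if mask_future:
                # Upper-triangular -inf mask so position t cannot attend to
                # future positions (needed for language modeling; unnecessary
                # for classification, where the whole sequence is given).
                seq_len = out.size(0)
                attn_mask = torch.triu(
                    torch.full((seq_len, seq_len), float("-inf")), diagonal=1
                )
            attn_out, _ = self.attn(out, out, out, attn_mask=attn_mask)
            return out + attn_out / math.sqrt(out.size(-1))

    # e.g. AttentiveRNNBlock(dim=128)(torch.randn(35, 16, 128))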
