use attention_3d_block in many-to-many mapping #7
Original post by @Opdoop:

Hi, I'm a beginner with Keras and I'm trying to use attention_3d_block in a translation model. My input is 5 sentences, each padded to 6 words, with each word represented as a 620-dimensional embedding. The output is 5 sentences, each padded to 9 words, with each word one-hot encoded over a 30-word vocabulary. How can I use attention_3d_block in this scenario, given that the LSTM is many-to-many?
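For context, a baseline many-to-many (encoder-decoder) Keras model with the exact shapes from the question, before any attention is added, might look like the following minimal sketch. The LSTM sizes, optimizer, and loss are assumptions, not code from the thread:

```python
from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

# Shapes from the question: source sentences padded to 6 words,
# each word a 620-dim embedding; targets padded to 9 words,
# each word one-hot over a 30-word vocabulary.
INPUT_LEN, EMBED_DIM = 6, 620
OUTPUT_LEN, VOCAB_SIZE = 9, 30

inputs = Input(shape=(INPUT_LEN, EMBED_DIM))
# Encoder: compress the source sentence into a single context vector.
encoded = LSTM(128)(inputs)                        # (batch, 128)
# Repeat the context vector once per target time step.
decoded = RepeatVector(OUTPUT_LEN)(encoded)        # (batch, 9, 128)
decoded = LSTM(128, return_sequences=True)(decoded)
# Per-time-step softmax over the 30-word vocabulary.
outputs = TimeDistributed(Dense(VOCAB_SIZE, activation='softmax'))(decoded)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
```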
@Opdoop can you please share the minimal code to reproduce your error?

---
@philipperemy
The module is:

[code not captured]

And I run the model as:

[code not captured]

---
In that case, what you need is sequence-to-sequence attention. This project does not support that.

---
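For readers landing here: a minimal sketch of what sequence-to-sequence (dot-product, Luong-style) attention can look like when built from standard Keras layers. This is not this repository's attention_3d_block; the layer sizes and the teacher-forcing decoder input are assumptions:

```python
from keras.models import Model
from keras.layers import (Input, LSTM, Dense, Activation, Dot,
                          Concatenate, TimeDistributed)

INPUT_LEN, EMBED_DIM = 6, 620
OUTPUT_LEN, VOCAB_SIZE = 9, 30
UNITS = 128

enc_in = Input(shape=(INPUT_LEN, EMBED_DIM))
# Keep every encoder state so the decoder can attend over them.
enc_seq = LSTM(UNITS, return_sequences=True)(enc_in)       # (batch, 6, 128)

dec_in = Input(shape=(OUTPUT_LEN, VOCAB_SIZE))             # teacher forcing
dec_seq = LSTM(UNITS, return_sequences=True)(dec_in)       # (batch, 9, 128)

# Dot-product scores between each decoder step and each encoder step.
scores = Dot(axes=[2, 2])([dec_seq, enc_seq])              # (batch, 9, 6)
# Softmax over the 6 source positions gives the attention weights.
weights = Activation('softmax')(scores)
# Weighted sum of encoder states -> one context vector per target step.
context = Dot(axes=[2, 1])([weights, enc_seq])             # (batch, 9, 128)

combined = Concatenate(axis=-1)([context, dec_seq])        # (batch, 9, 256)
outputs = TimeDistributed(Dense(VOCAB_SIZE, activation='softmax'))(combined)

model = Model([enc_in, dec_in], outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```

The key difference from attention_3d_block's many-to-one pattern is that the attention weights here are computed per decoder time step, so every target word gets its own view of the source sentence.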
Ooooooooh. Thanks a lot.

---
Actually, I'm not sure about this. This project: [link not captured] is the most famous seq2seq implementation in Keras. Maybe there's an attention mechanism there. To be checked.

---
Yes, it does have one!