
attention seq2seq need #164

Closed

superhy opened this issue Jun 14, 2017 · 3 comments

Comments

superhy commented Jun 14, 2017

No description provided.

zsdonghao (Member) commented

I've marked this issue as help wanted. If anyone has used TF's dynamic RNN encoder and attention seq2seq, feel free to contribute ~
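For reference, a minimal sketch of what such a contribution could look like with the TF 1.x contrib API (a dynamic RNN encoder feeding a Luong-attention decoder). The shapes, placeholder names, and hyperparameters are illustrative assumptions, not TL or TF requirements:

```python
import tensorflow as tf

# Illustrative sizes; pre-embedded inputs, so no embedding layer is needed here.
batch_size, num_units, emb_dim, vocab_size = 32, 256, 128, 10000
enc_inputs = tf.placeholder(tf.float32, [batch_size, None, emb_dim])  # source embeddings
enc_lengths = tf.placeholder(tf.int32, [batch_size])
dec_inputs = tf.placeholder(tf.float32, [batch_size, None, emb_dim])  # target embeddings
dec_lengths = tf.placeholder(tf.int32, [batch_size])

# Dynamic RNN encoder.
enc_cell = tf.contrib.rnn.LSTMCell(num_units)
enc_outputs, enc_state = tf.nn.dynamic_rnn(
    enc_cell, enc_inputs, sequence_length=enc_lengths, dtype=tf.float32)

# Attention mechanism over the encoder outputs, wrapped around the decoder cell.
attention = tf.contrib.seq2seq.LuongAttention(
    num_units, memory=enc_outputs, memory_sequence_length=enc_lengths)
dec_cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.contrib.rnn.LSTMCell(num_units), attention, attention_layer_size=num_units)
init_state = dec_cell.zero_state(batch_size, tf.float32).clone(cell_state=enc_state)

# Teacher-forced training decoder projecting to the output vocabulary.
helper = tf.contrib.seq2seq.TrainingHelper(dec_inputs, dec_lengths)
decoder = tf.contrib.seq2seq.BasicDecoder(
    dec_cell, helper, init_state, output_layer=tf.layers.Dense(vocab_size))
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
logits = outputs.rnn_output
```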

superhy (Author) commented Jun 19, 2017

If I want to input an embedding matrix that I trained myself with another API (like Gensim), I don't want to use the embedding layer in TL or even TF; I want to call a plain attention seq2seq or peeky seq2seq directly.
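A minimal sketch of that workflow, assuming gensim < 4.0 (where the vocabulary list is `index2word`; newer releases call it `index_to_key`) and TF 1.x; the file path and placeholder names are hypothetical. The externally trained matrix is frozen as a `tf.constant`, so no trainable TL/TF embedding layer is created, and the looked-up tensors can be fed straight into an encoder/decoder like the one sketched above:

```python
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

# Load vectors trained elsewhere with Gensim; the path is hypothetical.
kv = KeyedVectors.load_word2vec_format('my_vectors.bin', binary=True)

# Build the embedding matrix in vocabulary-index order.
emb_matrix = np.stack([kv[w] for w in kv.index2word]).astype(np.float32)

# Freeze the embeddings as a constant: no embedding layer, nothing to train.
embedding = tf.constant(emb_matrix)
token_ids = tf.placeholder(tf.int32, [None, None])       # [batch, time]
embedded = tf.nn.embedding_lookup(embedding, token_ids)  # feed to the seq2seq directly
```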

superhy (Author) commented Jun 19, 2017

In other words, an EmbeddingAttentionSeq2seqWrapper function like the one in the "easy_seq2seq" project targets too old a TF version.

luomai closed this as completed Feb 18, 2018