
Bugs in Seq2Seq model #5

Closed
JianLiu91 opened this issue Jul 2, 2017 · 1 comment

Comments

@JianLiu91 commented Jul 2, 2017

Hi, the code has a nice abstraction and is easy to follow. Thanks!!

However, there are some issues in your implementation:

(code)

If you don't pass c_t through a Linear layer that maps the encoder hidden size to the decoder hidden size, the code crashes (the encoder and decoder can have different dimensions).
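
For reference, here is a minimal PyTorch sketch of the kind of fix being suggested. The class `EncDecBridge` and its attribute names are illustrative assumptions, not the repository's actual code; the point is simply that both h_t and c_t get projected from the encoder hidden size to the decoder hidden size.

```python
import torch
import torch.nn as nn

class EncDecBridge(nn.Module):
    """Hypothetical bridge module: projects the encoder's final (h_t, c_t)
    to the decoder's hidden size so the two sizes may differ."""

    def __init__(self, src_hidden_dim, trg_hidden_dim):
        super().__init__()
        self.encoder2decoder_h = nn.Linear(src_hidden_dim, trg_hidden_dim)
        self.encoder2decoder_c = nn.Linear(src_hidden_dim, trg_hidden_dim)

    def forward(self, h_t, c_t):
        # h_t, c_t: (batch, src_hidden_dim) -> (batch, trg_hidden_dim)
        decoder_init_h = torch.tanh(self.encoder2decoder_h(h_t))
        decoder_init_c = self.encoder2decoder_c(c_t)
        return decoder_init_h, decoder_init_c

bridge = EncDecBridge(src_hidden_dim=32, trg_hidden_dim=16)
h_t = torch.zeros(4, 32)   # dummy final encoder hidden state, batch of 4
c_t = torch.zeros(4, 32)   # dummy final encoder cell state
decoder_init_h, decoder_init_c = bridge(h_t, c_t)  # both (4, 16)
```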

(code)

When self.decoder.num_layers != 1, the view call crashes because of a dimension mismatch.
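
A possible fix, again only as a sketch (the variable names and the use of `unsqueeze(0).repeat(...)` are assumptions about how one might write it, not the repo's code): build the decoder's initial state by repeating the projected vectors across `self.decoder.num_layers` instead of a hard-coded `view(1, batch, dim)`, which only works for a single-layer decoder.

```python
import torch
import torch.nn as nn

batch_size, trg_hidden_dim, num_layers = 4, 16, 2
decoder = nn.LSTM(input_size=8, hidden_size=trg_hidden_dim,
                  num_layers=num_layers, batch_first=True)

# Stand-ins for the bridged encoder state from the sketch above: (batch, trg_hidden_dim).
decoder_init_h = torch.zeros(batch_size, trg_hidden_dim)
decoder_init_c = torch.zeros(batch_size, trg_hidden_dim)

# Repeat across layers -> (num_layers, batch, trg_hidden_dim), valid for any num_layers.
h_0 = decoder_init_h.unsqueeze(0).repeat(decoder.num_layers, 1, 1)
c_0 = decoder_init_c.unsqueeze(0).repeat(decoder.num_layers, 1, 1)

trg_emb = torch.zeros(batch_size, 10, 8)           # dummy target embeddings (batch, seq, emb)
output, (h_n, c_n) = decoder(trg_emb, (h_0, c_0))  # no shape mismatch
```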

@MaximumEntropy (Owner) commented

Hi,

Thanks for pointing this out; I was working on fixing these in the refactor branch. I should merge refactor into master sometime soon, and these issues should be fixed then.
