
Potential bug in Encoder backward for uneven batches #306

Closed
quanpn90 opened this issue Jun 1, 2017 · 1 comment


quanpn90 commented Jun 1, 2017

Hi,

I trained several models with uneven batches and noticed something in the Encoder backward pass:

  • During the forward pass, the finalStates are normally not the states at the last time step, because the sequences are padded on the right.
  • During the backward pass, it looks like when we initialize the gradients for the finalStates, we also need to put each gradient at the position from which the finalStates were extracted, rather than at the last position as is currently done by default (see the sketch below). Is that correct?
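As a sketch of what I mean (PyTorch-style pseudocode with made-up shapes, not the Lua code in this repository):

```python
import torch

# A batch of 3 sequences of uneven length, unrolled over 5 time steps.
seq_len, batch, dim = 5, 3, 4
lengths = [5, 3, 2]  # right-padded: sequence b ends at step lengths[b] - 1

# Gradient flowing back into the extracted final states (one row per sequence).
grad_final = torch.randn(batch, dim)

# Gradient buffer for the unrolled encoder outputs.
grad_outputs = torch.zeros(seq_len, batch, dim)

# Proposed initialization: scatter each sequence's gradient to the time step
# where its final state was actually extracted.
for b in range(batch):
    grad_outputs[lengths[b] - 1, b] = grad_final[b]

# The current behaviour would instead do:
#   grad_outputs[-1] = grad_final
# which is only valid when every sequence really ends at the last time step.
```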

Best,
Quan

@guillaumekln
Collaborator

Hi,

Thanks for reporting, this is indeed an issue.

However, sequences are padded on the left by default, so the problem only occurs when using -brnn, where the input sequences are reversed.

I'm currently refactoring the padding management. I should take the opportunity to fix this inconsistency as well.
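For illustration, a small sketch (my assumptions about the padding layout, not the code in this repository) of why left padding makes the default correct for the forward direction but not once -brnn reverses the inputs:

```python
# Positions of the true final states under each convention.
lengths = [5, 3, 2]
max_len = 5

# Left-padded input: real tokens occupy steps [max_len - length, max_len),
# so every sequence ends at the last step and the default backward is correct.
final_pos_forward = [max_len - 1 for _ in lengths]

# Reversed input (backward RNN of -brnn): the padding ends up on the right,
# so sequence b now ends at step lengths[b] - 1.
final_pos_reversed = [length - 1 for length in lengths]

print(final_pos_forward)   # [4, 4, 4]
print(final_pos_reversed)  # [4, 2, 1]
```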
