Hi, thanks for releasing such a beautiful toolkit!
This library seems to follow https://github.com/harvardnlp/seq2seq-attn;
however, compared to seq2seq-attn, the CNN-based character-aware encoder is not available.
Is there any plan to add such a feature (CNN) in the future?
We don't have plans to do this in the short term, as it complicates the input and output formatting. An alternative approach is to use subword encodings like BPE, which can be wrapped around the whole system. We will release tools to do this in the near future, but in the meantime there are several that exist online (https://github.com/vteromero/byte-pair-encoding).
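For reference, here is a toy sketch of how BPE merge learning works (in the style of Sennrich et al.'s algorithm); it is not the API of the linked repository, and all names in it are made up for illustration. Words are space-separated symbol sequences mapped to corpus frequencies, and the most frequent adjacent symbol pair is merged greedily:

```lua
-- Toy BPE vocabulary: word (as space-separated symbols) -> corpus frequency.
local vocab = {
  ['l o w </w>'] = 5, ['l o w e r </w>'] = 2,
  ['n e w e s t </w>'] = 6, ['w i d e s t </w>'] = 3,
}

local function splitSymbols(word)
  local symbols = {}
  for s in word:gmatch('%S+') do table.insert(symbols, s) end
  return symbols
end

-- Count adjacent symbol pairs, weighted by word frequency.
local function pairStats(vocab)
  local stats = {}
  for word, freq in pairs(vocab) do
    local symbols = splitSymbols(word)
    for i = 1, #symbols - 1 do
      local pair = symbols[i] .. ' ' .. symbols[i + 1]
      stats[pair] = (stats[pair] or 0) + freq
    end
  end
  return stats
end

-- Merge every occurrence of (first, second) into a single symbol.
local function applyMerge(first, second, vocab)
  local merged = {}
  for word, freq in pairs(vocab) do
    local symbols, out, i = splitSymbols(word), {}, 1
    while i <= #symbols do
      if symbols[i] == first and symbols[i + 1] == second then
        table.insert(out, first .. second); i = i + 2
      else
        table.insert(out, symbols[i]); i = i + 1
      end
    end
    local w = table.concat(out, ' ')
    merged[w] = (merged[w] or 0) + freq
  end
  return merged
end

-- Greedily learn a handful of merge operations.
for step = 1, 8 do
  local stats = pairStats(vocab)
  local best, bestCount
  for pair, count in pairs(stats) do
    if not bestCount or count > bestCount then best, bestCount = pair, count end
  end
  if not best then break end
  print(string.format('merge %d: %s (count %d)', step, best, bestCount))
  local first, second = best:match('(%S+) (%S+)')
  vocab = applyMerge(first, second, vocab)
end
```

Since the learned merges operate on plain text, the encoding can be applied as a preprocessing step and undone after decoding, which is what "wrapped around the whole system" means here.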
@srush Thanks. Unfortunately, applying BPE is not convenient for my language pair.
I just implemented it myself, along with the variational (Bayesian) dropout from "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks" (Gal & Ghahramani), and I am still tuning.
I am wondering why this version of dropout isn't added? The paper above reports better performance than the current Zaremba-style dropout.
I found it can be implemented in a very simple way: rewrite nn.Dropout to force the noise mask to stay fixed across all time steps, tie the noise tensors between the clones, and edit LSTM.lua to apply dropout to all nodes that feed into nn.Linear nodes (see the sketch below). I will share the results when the experiment is complete.
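A minimal sketch of the fixed-noise part, assuming Torch7's nn package; the module name nn.FixedDropout and the resetNoise method are hypothetical names of mine, not part of the toolkit, and the clone-tying and LSTM.lua edits are not shown:

```lua
require 'nn'

-- Hypothetical module: inverted dropout whose Bernoulli mask is sampled once
-- and then reused for every time step, per Gal & Ghahramani.
local FixedDropout, parent = torch.class('nn.FixedDropout', 'nn.Dropout')

function FixedDropout:__init(p)
  parent.__init(self, p)
  self.frozen = false  -- whether the current mask should be reused
end

-- Call once per sequence (or minibatch) to sample a fresh mask on the
-- next forward pass.
function FixedDropout:resetNoise()
  self.frozen = false
end

function FixedDropout:updateOutput(input)
  self.output:resizeAs(input):copy(input)
  if self.train then
    if not self.frozen then
      -- Sample the mask once; every later time step reuses self.noise.
      self.noise:resizeAs(input)
      self.noise:bernoulli(1 - self.p)
      self.noise:div(1 - self.p)  -- inverted-dropout scaling
      self.frozen = true
    end
    self.output:cmul(self.noise)
  end
  return self.output
end

function FixedDropout:updateGradInput(input, gradOutput)
  self.gradInput:resizeAs(gradOutput):copy(gradOutput)
  if self.train then
    self.gradInput:cmul(self.noise)  -- same mask as the forward pass
  end
  return self.gradInput
end
```

Since the training graph is unrolled into per-time-step clones, tying the noise between clones amounts to pointing every clone's dropout module at the same self.noise tensor, so that one resetNoise call per sequence refreshes the mask everywhere.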