This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
In the paper we compared pretraining the full model (--reload_model) against pretraining the embeddings only (--reload_emb), and we found that pretraining only the embeddings doesn't work as well, so we didn't include any examples of this in the repo. You could pretrain the embeddings before pretraining the language model, but this wouldn't be useful, since training the cross-lingual language model already provides high-quality cross-lingual embeddings.
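To make the distinction concrete, here is a minimal, hypothetical sketch (not this repository's actual code) of what an embedding-only reload in the style of --reload_emb amounts to: copying pretrained word vectors into the model's embedding layer before language-model pretraining, whereas --reload_model restores a full pretrained checkpoint. The function name, the fastText-style text format, and the `word2id` mapping below are assumptions for illustration.

```python
# Hypothetical sketch (assumed names and file format, not the repository's code):
# embedding-only pretraining overwrites the embedding matrix with pretrained
# word vectors; reloading the full model would restore an entire checkpoint.
import torch
import torch.nn as nn

def reload_pretrained_embeddings(embedding: nn.Embedding, emb_path: str, word2id: dict) -> int:
    """Copy pretrained vectors into an embedding layer; return how many rows were set.

    Assumes a fastText-style text file with one "<word> <v1> ... <vd>" line per word.
    Words missing from the file keep their random initialization.
    """
    n_loaded = 0
    with open(emb_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            # Skip malformed lines (e.g. a "count dim" header) and unknown words.
            if word in word2id and len(values) == embedding.embedding_dim:
                with torch.no_grad():
                    embedding.weight[word2id[word]] = torch.tensor([float(v) for v in values])
                n_loaded += 1
    return n_loaded
```

As the answer above notes, this step is redundant here: pretraining the cross-lingual language model already yields high-quality cross-lingual embeddings, and initializing only the embedding layer this way worked less well than reloading the full pretrained model.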
First, thanks for sharing your code!
I really appreciate it.
I have a question about pre-trained word embeddings for the unsupervised NMT task.
While reviewing the code, I noticed that pre-trained word embeddings are never used (--reload_emb is left empty).
If it is true that pre-trained word embeddings are not used, is there a specific reason for not using them?
Thank you!