New models #18
At the moment we are focusing on training models for the Tatoeba MT Challenge that we released recently (https://github.com/Helsinki-NLP/Tatoeba-Challenge). There will be some updated models there; check it out. Otherwise, we will continue updating existing language pairs, but progress may be slow, as training requires a lot of resources and time. I cannot promise new models frequently.
And, yes, the trick to improving models is to train more. SentencePiece-based segmentation is also useful, as are some other smallish improvements in data pre-processing.
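To illustrate the idea behind subword segmentation, here is a toy greedy longest-match segmenter over a fixed vocabulary. This is purely illustrative: SentencePiece itself learns its vocabulary from data with a unigram language model or BPE, and the vocabulary below is made up.

```python
# Toy greedy longest-match subword segmentation. SentencePiece learns its
# vocabulary from raw text; this hand-picked vocabulary is for illustration.

def segment(word, vocab):
    """Split `word` into the longest vocabulary pieces, left to right."""
    pieces = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as a single-character piece.
            pieces.append(word[i])
            i += 1
    return pieces

vocab = {"trans", "lat", "ion", "s", "un", "re"}
print(segment("translations", vocab))  # ['trans', 'lat', 'ion', 's']
```

The practical benefit is an open vocabulary: rare and unseen words decompose into known pieces instead of becoming unknown tokens.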
Oh, great! Thanks again for the Tatoeba-Challenge project! You recently published a Spanish-to-English model and other models that we need!
And, of course, back-translation. I noticed that you do something with back-translation. There is another Facebook article with details: https://arxiv.org/abs/1808.09381. This step alone allowed them to improve BLEU by 4 points.
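The back-translation data loop is simple in outline: translate monolingual target-language text back into the source language with a reverse model, then pair the synthetic source with the genuine target and mix those pairs into the training data for the forward model. A minimal sketch, where `reverse_translate` is a stand-in for a real target-to-source NMT model:

```python
# Sketch of the back-translation data loop. `reverse_translate` is a
# placeholder; a real system would run beam search with a trained
# target->source model here.

def reverse_translate(sentence):
    # Placeholder for reverse-model inference.
    return "<synthetic> " + sentence

def make_backtranslated_pairs(mono_target_sentences):
    """Build (synthetic source, genuine target) training pairs."""
    pairs = []
    for tgt in mono_target_sentences:
        src = reverse_translate(tgt)   # synthetic source side
        pairs.append((src, tgt))       # forward model trains on (src, tgt)
    return pairs

pairs = make_backtranslated_pairs(["Hello world.", "Good morning."])
```

The target side of each pair is real text, so the forward model learns to produce fluent output even though the source side is machine-generated.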
Yes, I do apply language identification in the new Tatoeba-MT models, along with some other basic filtering. Length-ratio filtering has always been part of the pipeline; it has been well known since the old SMT days and the Moses tools. However, I am not as strict as the paper suggests. There are a lot of hyperparameters that can be optimized for each language pair. Back-translation is part of all models that include "+bt" in their name.

I need to stress that the OPUS-MT models are not tuned towards news translation from the WMT test sets. It is not surprising if there are performance differences, as simple domain adaptation boosts performance a lot. I will try to also include some fine-tuned models later; a fine-tuning framework is already integrated in OPUS-MT.

By the way, it's a bit funny that most people point to Facebook/Google papers when they refer to techniques developed and proposed by researchers in academia. I guess that universities have to improve their PR units ...
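For readers unfamiliar with length-ratio filtering, here is a minimal sketch of the kind of check used in SMT/NMT data cleaning (the Moses toolkit ships a script, clean-corpus-n.perl, for this). The threshold of 9.0 and the 1-100 token bounds below are illustrative values only; as noted above, these are hyperparameters worth tuning per language pair.

```python
# Minimal length-ratio filter for parallel sentence pairs. The ratio
# threshold and length bounds are illustrative hyperparameters, not the
# values used in OPUS-MT.

def length_ratio_ok(src, tgt, max_ratio=9.0, min_len=1, max_len=100):
    """Keep a pair only if both sides are within the length bounds and
    the longer side is at most `max_ratio` times the shorter one."""
    ls, lt = len(src.split()), len(tgt.split())
    if not (min_len <= ls <= max_len and min_len <= lt <= max_len):
        return False
    return max(ls, lt) / min(ls, lt) <= max_ratio

bitext = [("a b c", "x y"), ("a", "x " * 50)]
kept = [pair for pair in bitext if length_ratio_ok(*pair)]
# The second pair (1 vs 50 tokens) is dropped as likely misaligned.
```

Pairs with wildly mismatched lengths are usually alignment errors or boilerplate, so dropping them cheaply improves training data quality.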
Thanks for all of these models! Sometimes they perform comparably to Google Translate!
I noticed that you improved the model for French and several other languages. Do you have plans to do the same for the es-en, pt-en, da-en, and it-en pairs?
And what was the trick that improved results?