
Training model for languages with great differences #19

Closed
liujiqiang999 opened this issue Feb 28, 2019 · 2 comments

Comments

@liujiqiang999

Thanks for your work. I have a question: when training on languages with great differences, such as Chinese-English or English-Kazakh, is it a good choice to share all parameters? I notice that XLM usually shares all parameters.

@glample
Contributor

glample commented Feb 28, 2019

Hi, yes, I don't think this is a problem. More and more studies have shown that a single model can handle multiple languages even when they are very different. Of course, if there are very few anchor points, it is better to have parallel data to facilitate the alignment.
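
For readers unsure what "sharing all parameters" means in practice, here is a minimal PyTorch sketch (not the actual XLM code): a single Transformer encoder and one embedding table over a joint vocabulary serve every language, with only a small language embedding distinguishing inputs. All names and sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Hypothetical fully-shared encoder: same weights for every language."""
    def __init__(self, vocab_size, n_langs, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        # One embedding table over the joint (e.g. shared-BPE) vocabulary.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Language embeddings are the only language-specific parameters here.
        self.lang_emb = nn.Embedding(n_langs, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # A single encoder stack is reused for every language pair.
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens, lang_id):
        # tokens: (batch, seq_len) token ids; lang_id: (batch,) language ids
        x = self.token_emb(tokens) + self.lang_emb(lang_id).unsqueeze(1)
        return self.encoder(x)

# A Chinese batch and an English batch pass through the same parameters;
# only the language id differs.
model = SharedEncoder(vocab_size=32000, n_langs=2)
zh_batch = torch.randint(0, 32000, (4, 16))
en_batch = torch.randint(0, 32000, (4, 16))
zh_out = model(zh_batch, torch.zeros(4, dtype=torch.long))
en_out = model(en_batch, torch.ones(4, dtype=torch.long))
```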

@liujiqiang999
Author

Thanks for your instant reply. I will think seriously about this question. Neural networks are so amazing that some results are difficult to understand, such as how sharing all parameters between languages with few anchor points can still achieve good results.
