[ES] Spanish translation week 12, section 2 #570
Conversation
Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
infinitive to gerund Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
infinitive to gerund Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
infinitive to gerund Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
infinitive to gerund Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
upper case Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
infinitive to gerund Co-authored-by: Willebaldo Gómez <willebaldo.gomez@gmail.com>
You need to edit docs/_config.yml as well.
@@ -295,6 +295,9 @@ es:
    sections:
    - path: es/week11/11-1.md
    - path: es/week11/11-2.md
  - path: es/week12/12.md
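For context, the Spanish block in `docs/_config.yml` would end up shaped roughly like this (a sketch of the structure under discussion; the title value and surrounding keys are assumed, not copied from the repo):

```yaml
es:
  title: 'Aprendizaje Profundo'   # assumed title, check the actual file
  chapters:
  - path: es/week11/11.md
    sections:
    - path: es/week11/11-1.md
    - path: es/week11/11-2.md
  - path: es/week12/12.md
    sections:
    - path: es/week12/12-2.md
```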
My PR that's almost ready to get merged includes this, if you're concerned: https://github.com/Atcold/pytorch-Deep-Learning/pull/555/files
OK so I will add mine using the same header, fine?
i.e. this will be present
- path: es/week12/12.md
sections:
- path: es/week12/12-2.md
My question is whether it will generate a duplicate; sorry for the trouble, I'm just learning GitHub lol
I think it shouldn't create a merge conflict because it's exactly the same, but even if it does, resolving it should be trivial.
And how should it be resolved then?
(A) by me adding yours - path: es/week12/12-3.md
(B) by me removing all except for - path: es/week12/12-2.md
Am I being clear with my question? I.e., what do you want me to do about it?
Typically, when your PR happens to be merged first, option (A) would apply. If my PR were to be merged first, then you'd just need to be sure that nothing got duplicated.
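One quick way to check that nothing got duplicated after resolving the merge (a generic shell sketch, not project tooling; the here-string stands in for the real file, so replace it with `cat docs/_config.yml` for actual use):

```shell
# Sample config lines standing in for docs/_config.yml (hypothetical content).
config='- path: es/week12/12.md
- path: es/week12/12-2.md
- path: es/week12/12.md'

# `sort` groups identical lines together; `uniq -d` prints each
# duplicated line exactly once. Empty output means no duplicates.
dups=$(printf '%s\n' "$config" | sort | uniq -d)
echo "$dups"
```

Any line this prints appears more than once in the sections list and should be removed.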
Oh ok, will leave as is and keep an eye on what happens next. Thanks!
added 12-2 in config
I learned a lot by reviewing this translation, NLP is such an interesting topic!
I left several minor comments, but overall the translation looks great!
docs/es/week12/12-2.md
Outdated
<!-- The algorithm selects the best scoring hypothesis.-->
El algoritmo selecciona la mejor hipótesis de puntuación.
May I suggest "...selecciona la hipótesis con mejor puntuación."?
Haha yes it is (an interesting topic). Great bug-detection skills! And great suggestions!
All were implemented
docs/es/week12/12-2.md
Outdated
<!-- The beam tree continues until it reaches the end of sentence token. Upon outputting the end of sentence token, the hypothesis is finished.-->
El árbol de haces continúa hasta que llega al final del token de oración. Al generar el final del token de oración, la hipótesis está terminada.
May I suggest "...hasta que llega al token de final de oración."? In this case, "end of sentence" is acting as an adjective for "token"
Changed! Thanks!
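For readers of the translated section, the behaviour being reviewed above (keep the best-scoring hypotheses, finish each one when it emits the end-of-sentence token) can be sketched as a toy beam search. This is a generic illustration, not the course code; `score_next` and `demo_score_next` are made-up stand-ins for the model's next-token distribution:

```python
import heapq
import math

EOS = "<eos>"

def beam_search(score_next, beam_size=3, max_len=10):
    """Toy beam search: keep the beam_size best partial hypotheses by
    log-probability, extend each with candidate next tokens, and finish
    a hypothesis when it emits the end-of-sentence token."""
    beams = [(0.0, [])]            # (log-probability, tokens so far)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, toks in beams:
            for tok, p in score_next(toks):
                cand = (logp + math.log(p), toks + [tok])
                if tok == EOS:
                    finished.append(cand)   # hypothesis is complete
                else:
                    candidates.append(cand)
        if not candidates:
            break
        beams = heapq.nlargest(beam_size, candidates)
    # The algorithm selects the best-scoring hypothesis.
    return max(finished + beams)[1]

def demo_score_next(toks):
    # Hypothetical next-token distribution: prefers "hola",
    # then strongly prefers ending the sentence after two tokens.
    if len(toks) >= 2:
        return [(EOS, 0.9), ("hola", 0.1)]
    return [("hola", 0.6), ("mundo", 0.4)]
```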
docs/es/week12/12-2.md
Outdated
<!-- Why (in NMT) do very large beam sizes often results in empty translations?-->
¿Por qué (en NMT) los tamaños de haces muy grandes a menudo dan como resultado traslaciones vacías?
I believe the more appropriate translation for "translations" is "traducciones" in this context
docs/_config.yml
Outdated
@@ -422,6 +429,9 @@ tr:
ja:
  title: '深層学習'
  chapters:
  - path: ja/week03/03.md
Was this needed for the site to compile locally?
docs/es/week12/12-2.md
Outdated
<!-- A pure sampling technique where you truncate the distribution to the $k$ best and then renormalise and sample from the distribution.-->
Una técnica de muestreo pura en la que se trunca la distribución al $ k $ best y luego se vuelve a normalizar y se muestra la distribución.
I believe "to the K best" here refers to "the K best samples", so the translation "a los K mejores" seems more appropriate.
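The technique being translated here (truncate the distribution to the $k$ best tokens, renormalise, and sample) can be sketched as follows. This is a generic illustration of top-k sampling, not the course code; `top_k_sample` is a made-up helper name:

```python
import random

def top_k_sample(probs, k, rng=random):
    """Top-k sampling: truncate the distribution to the k most probable
    tokens, renormalise the kept probability mass, and sample from it."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)              # renormalisation constant
    tokens = [t for t, _ in top]
    weights = [p / total for _, p in top]
    return rng.choices(tokens, weights=weights, k=1)[0]
```

With k=2 over `{"el": 0.5, "la": 0.3, "un": 0.15, "una": 0.05}`, only "el" and "la" can ever be sampled, with renormalised weights 0.625 and 0.375.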
docs/es/week12/12-2.md
Outdated
- ¡Muchos objetivos diferentes de preentrenamiento funcionan bien!
- Crucial para modelar interacciones profundas y bidireccionales entre palabras
May I suggest replacing "Crucial para modelar..." with "Es muy importante modelar..."?
### Algunas preguntas abiertas en NLP
- ¿Cómo deberíamos integrar el conocimiento mundial?
In the previous PR, I suggested using "conocimiento del mundo real" instead of "conocimiento mundial", as that's what I've seen most commonly used in text for LATAM, but I'm not sure if the same applies in other Spanish-speaking regions of the world (particularly Spain)
docs/es/week12/12-2.md
Outdated
<!--- Models can learn a lot about language by predicting words in unlabelled text. This turns out to be a great unsupervised learning objective. Fine tuning for specific tasks is then easy-->
- Los modelos pueden aprender mucho sobre el lenguaje al predecir palabras en texto sin etiquetar. Esto resulta ser un gran objetivo de aprendizaje sin supervisión. La puesta a punto para tareas específicas es fácil
Suggested change:
Original: - Los modelos pueden aprender mucho sobre el lenguaje al predecir palabras en texto sin etiquetar. Esto resulta ser un gran objetivo de aprendizaje sin supervisión. La puesta a punto para tareas específicas es fácil
Suggestion: - Los modelos pueden aprender mucho sobre el lenguaje al predecir palabras en texto sin etiquetar. Esto resulta ser un gran objetivo de aprendizaje sin supervisión. La puesta a punto para tareas específicas se vuelve entonces fácil
docs/es/week12/12-2.md
Outdated
<!--Interestingly, the lecturer (Mike Lewis, Research Scientist, FAIR) is working on a concept called ‘Grounded Language’. The aim of that field of research is to build conversational agents that are able to chit-chat or negotiate. Chit-chatting and negotiating are abstract tasks with unclear objectives as compared to text classification or text summarization.-->
Curiosamente, el profesor (Mike Lewis, investigador científico, FAIR) está trabajando en un concepto llamado "Lenguaje fundamentado". El objetivo de ese campo de investigación es construir agentes conversacionales que sean capaces de charlar o negociar. Charlar y negociar son tareas abstractas con objetivos poco claros en comparación con la clasificación o el resumen del texto.
Suggested change:
Original: Curiosamente, el profesor (Mike Lewis, investigador científico, FAIR) está trabajando en un concepto llamado "Lenguaje fundamentado". El objetivo de ese campo de investigación es construir agentes conversacionales que sean capaces de charlar o negociar. Charlar y negociar son tareas abstractas con objetivos poco claros en comparación con la clasificación o el resumen del texto.
Suggestion: Curiosamente, el profesor (Mike Lewis, investigador científico, FAIR) está trabajando en un concepto llamado "Lenguaje Fundamentado". El objetivo de ese campo de investigación es construir agentes conversacionales que sean capaces de charlar o negociar. Charlar y negociar son tareas abstractas con objetivos poco claros en comparación con la clasificación o el resumen del texto.
docs/es/week12/12-2.md
Outdated
<!--‘World Knowledge’ is an abstract concept. We can test models, at the very basic level, for their world knowledge by asking them simple questions about the concepts we are interested in. Models like BERT, RoBERTa and T5 have billions of parameters. Considering these models are trained on a huge corpus of informational text like Wikipedia, they would have memorized facts using their parameters and would be able to answer our questions. Additionally, we can also think of conducting the same knowledge test before and after fine-tuning a model on some task. This would give us a sense of how much information the model has ‘forgotten’.-->
El "conocimiento mundial" es un concepto abstracto. Podemos probar modelos, en un nivel muy básico, para su conocimiento del mundo haciéndoles preguntas simples sobre los conceptos que nos interesan. Modelos como BERT, RoBERTa y T5 tienen miles de millones de parámetros. Teniendo en cuenta que estos modelos están entrenados en un enorme corpus de texto informativo como Wikipedia, habrían memorizado hechos usando sus parámetros y podrían responder nuestras preguntas. Además, también podemos pensar en realizar la misma prueba de conocimientos antes y después de ajustar un modelo en alguna tarea. Esto nos daría una idea de cuánta información ha "olvidado" el modelo.
May I suggest rephrasing "Podemos probar modelos... para su conocimiento del mundo..." as "Podemos probar el conocimiento del mundo que tienen los modelos (en un nivel muy básico)..."?
Great suggestions overall, and also great bug-detecting! Implemented all of them, thanks!
Sweet, thanks!
Hmm, I got a notification from the PR.
You DID merge it and no changes happened since, so I'm as lost as you are.
Hmm… Yes, there are plenty of chapters to review! Check out the wiki!
Let's keep an eye on the associated videos
"beam search" --> here "búsqueda por haces" [https://www.youtube.com/watch?v=6D4EWKJgNn0&t=2732s]
"traducción inversa" [https://www.youtube.com/watch?v=6D4EWKJgNn0&t=3811s]
"entrenamiento previo para NLP" [https://www.youtube.com/watch?v=6D4EWKJgNn0&t=4963s]