[ES] Spanish translation week 12, section 3 #555
Conversation
While running locally to test the changes, I found and fixed a few broken things (e.g., the French translation for week 6 had an error that prevented the build from completing at all). I also added some scaffolding to be completed by others (myself included) in future PRs (I think it's best to avoid too much WIP).
docs/es/week09/09-1.md
Outdated
@@ -0,0 +1,7 @@
---
This change wasn't strictly necessary for this PR, but I added it so that the numbers matched in the local preview.
This is going to create merge conflicts. Please remove this boilerplate.
Done
docs/fr/week06/06-2.md
Outdated
@@ -492,7 +492,7 @@ L'idée d'un réseau mémoire est qu'il y a deux parties importantes dans votre
Pour un réseau mémoire, il y a une entrée au réseau, $x$ (pensez à cela comme une adresse de la mémoire), et comparez ce $x$ avec les vecteurs $k_1, k_2, k_3, \cdots$ ("clés") à travers un produit scalaire. En les faisant passer par une softmax, on obtient un tableau de nombres dont la somme est égale à un. Et il y a un ensemble d'autres vecteurs $v_1, v_2, v_3, \cdots$ ("valeurs"). Multipliez ces vecteurs par les scalaires provenant du softmax et additionnez ces vecteurs vous donne le résultat (notez la ressemblance avec le mécanisme d'attention).

<center>
- <img src="{{site.baseurl/images/week06/06-2/MemoryNetwork1.png" height="300px"/><br>
+ <img src="{{site.baseurl}}/images/week06/06-2/MemoryNetwork1.png" height="300px"/><br>
This was the error preventing the build from completing locally.
Yeah, we've fixed it already.
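As an aside, the memory-network passage quoted in the hunk above (dot products of the input against the keys, a softmax, then a weighted sum of the values) maps onto just a few lines of PyTorch. A minimal sketch, with all tensor names and sizes made up for illustration, not code from the course notebooks:

```python
import torch

d, n = 8, 4                  # hypothetical sizes: feature dim, number of memory slots
x = torch.randn(d)           # input to the network (the "address" into the memory)
keys = torch.randn(n, d)     # k_1, ..., k_n
values = torch.randn(n, d)   # v_1, ..., v_n

scores = keys @ x                    # dot product of x with every key
weights = torch.softmax(scores, 0)   # an array of numbers that sums to one
output = weights @ values            # scale the values by those numbers and add them up
```

As the passage itself notes, this is essentially the attention mechanism.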
docs/es/week12/12-3.md
Outdated
## [Atención](https://www.youtube.com/watch?v=f01J0Dri-6k&t=69s)

<!-- We introduce the concept of attention before talking about the Transformer architecture. There are two main types of attention: self attention *vs.* cross attention, within those categories, we can have hard *vs.* soft attention. -->
Presentamos el concepto de "atención" antes de hablar sobre la arquitectura del Transformador. Existen dos tipos principales de atención: auto atención y atención cruzada. Dentro de esas categorías, distinguimos entre atención "ténue" y atención "intensa".
I used the terms "atención ténue" and "atención intensa" for "soft attention" and "hard attention", respectively, but I still have doubts about whether we should just use the English terms. Thoughts?
In this context, soft means "smooth", hard means "1-hot". I think suave and duro are better choices. What do you think?
Makes sense. Will push a change for this in a bit.
docs/es/week12/12-3.md
Outdated
#### Atención Intensa

<!-- With hard-attention, we impose the following constraint on the alphas: $\Vert\vect{a}\Vert_0 = 1$. This means $\vect{a}$ is a one-hot vector. Therefore, all but one of the coefficients in the linear combination of the inputs equals zero, and the hidden representation reduces to the input $\boldsymbol{x}_i$ corresponding to the element $\alpha_i=1$. -->
En la atención intensa, imponemos las siguientes restricciones en las alfas: $\Vert\vect{a}\Vert_0 = 1$. Esto significa que $\vect{a}$ es un vector con codificación "one-hot". Por lo tanto, todos los coeficientes (con excepción de uno) son iguales a cero en la combinación lineal de las entradas, y la representación interna se reduce a la entrada $\boldsymbol{x}_i$ que corresponde al elemento $\alpha_i=1$.
Is there a better translation for "one-hot encoding" in Spanish?
"uno-caliente"?
I'd personally leave it as "one-hot". When I was taking CS back at college I remember having seen it that way.
I agree with @xcastilla; the term "uno-caliente" sounds very strange in Spanish.
Fine with me. I'm not a native speaker.
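To make the soft/hard distinction in the hunk above concrete: with soft attention the coefficients come from a softmax and the output is a smooth mixture of the inputs, whereas with hard attention the coefficient vector is one-hot and the output collapses to a single input $\boldsymbol{x}_i$. A minimal sketch with hypothetical shapes, for illustration only:

```python
import torch
import torch.nn.functional as F

x = torch.randn(5, 16)    # five inputs x_i, each of dimension 16 (made-up shapes)
scores = torch.randn(5)   # unnormalised attention scores

# Soft attention: the coefficients sum to one and the result is a mixture of the x_i.
a_soft = torch.softmax(scores, 0)
h_soft = a_soft @ x

# Hard attention: the coefficient vector is one-hot, so the hidden
# representation reduces to the single input x_i with alpha_i = 1.
a_hard = F.one_hot(scores.argmax(), num_classes=5).float()
h_hard = a_hard @ x
assert torch.allclose(h_hard, x[scores.argmax()])
```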
<!-- ## [Key-value store](https://www.youtube.com/watch?v=f01J0Dri-6k&t=1056s) -->
## [Almacén de clave/valor](https://www.youtube.com/watch?v=f01J0Dri-6k&t=1056s)
I'm not convinced about the translation for "key-value store" that I used here. Suggestions?
I think you may want to discuss this on Slack. I'm not a native speaker.
I've never heard anyone saying it in Spanish, but "Almacén de clave/valor" or "Base de datos clave/valor" seems to be right.
For reference:
https://aws.amazon.com/es/nosql/key-value/
docs/_config.yml
Outdated
- path: es/week09/09.md
  sections:
    - path: es/week09/09-1.md
    - path: es/week09/09-2.md
    - path: es/week09/09-3.md
We don't have the translation of week 9. Why did you add it?
I was just thinking of adding scaffolding, but you're right, it's likely to cause merge conflicts, so I removed it.
docs/es/week12/12-3.md
Outdated
## [El Transformador](https://www.youtube.com/watch?v=f01J0Dri-6k&t=2114s)

<!-- Expanding on our knowledge of attention in particular, we now interpret the fundamental building blocks of the transformer. In particular, we will take a forward pass through a basic transformer, and see how attention is used in the standard encoder-decoder paradigm and compares to the sequential architectures of RNNs. -->
Con el fin de expandir nuestro conocimiento sobre atención, interpretaremos ahora los bloques fundamentales del transformador. En particular, haremos un recorrido de principio a fin de un transformador básico, y veremos cómo se usa la atención en el paradigma estándar del codificador-decodificador, y cómo se compara esto con las arquitecturas secuenciales de las RNRs (Redes Neuronales Recurrentes).
I have a question here. I've seen in the wiki for other languages that acronyms shouldn't be changed:
e.g. https://github.com/Atcold/pytorch-Deep-Learning/wiki/Italian-translation#rules
Are we following this rule in the Spanish translation as well?
I wasn't aware of that rule. Thanks for bringing it up. I think it makes sense to use it for consistency with other translations.
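For the passage above about taking a forward pass through a basic encoder-decoder transformer, here is a minimal sketch using PyTorch's stock nn.Transformer module rather than the course's own notebook code; all shapes and hyper-parameters are made up:

```python
import torch
import torch.nn as nn

# A tiny encoder-decoder transformer; hyper-parameters are illustrative only.
model = nn.Transformer(d_model=32, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.randn(10, 1, 32)   # source sequence: (seq_len, batch, d_model)
tgt = torch.randn(7, 1, 32)    # target sequence generated so far
out = model(src, tgt)          # (7, 1, 32); unlike an RNN, every target position
                               # attends to the whole source in parallel
```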
<center>
<img src="{{site.baseurl}}/images/week12/12-3/figure1.png" style="zoom: 60%; background-color:#DCDCDC;" /><br>
<!-- <b>Figure 1:</b> Two example diagrams of an autoencoder. The model on the left shows how an autoencoder can be design with two affine transformations + activations, where the image on the right replaces this single "layer" with an arbitrary module of operations. -->
<b>Figura 1:</b> Dos diagramas ejemplificando un auto-codificador. El modelo de la izquierda muestra cómo un auto-codificador se puede diseñar con dos transformaciones afines + activaciones, mientras que el modelo de la derecha reemplaza esta "capa" única con un módulo arbitrario de operaciones.
Suggested change:
- <b>Figura 1:</b> Dos diagramas ejemplificando un auto-codificador. El modelo de la izquierda muestra cómo un auto-codificador se puede diseñar con dos transformaciones afines + activaciones, mientras que el modelo de la derecha reemplaza esta "capa" única con un módulo arbitrario de operaciones.
+ <b>Figura 1:</b> Dos diagramas ejemplificando un autoencoder. El modelo de la izquierda muestra cómo un auto-codificador se puede diseñar con dos transformaciones afines + activaciones, mientras que el modelo de la derecha reemplaza esta "capa" única con un módulo arbitrario de operaciones.
I saw in the translation for lesson 7-3 that autoencoder was not translated to Spanish.
We should either review lesson 7 or use the English name here to keep the notation consistent. What do you think?
In this case, I feel like "auto-codificador" is a good choice (unless we agreed to never translate names of technical terms like that). Maybe a good discussion to have in Slack.
Sounds good to me!
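Regarding the Figure 1 caption discussed above, the left-hand model (two affine transformations, each followed by an activation) is only a couple of lines in PyTorch. A minimal sketch with made-up layer sizes, not the course's reference implementation:

```python
import torch.nn as nn

# Encoder and decoder are each an affine transformation plus an activation;
# the layer sizes below are illustrative only.
autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),      # encoder
    nn.Linear(64, 784), nn.Sigmoid(),   # decoder
)
```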
As requested by @Atcold
paradigman => paradigma Co-authored-by: Joaquim Castilla <xcastilla89@gmail.com>
aAgregar => agregar Co-authored-by: Joaquim Castilla <xcastilla89@gmail.com>
Removes grammatical error in English version Co-authored-by: Joaquim Castilla <xcastilla89@gmail.com>
@zxul767 and @xcastilla, can you two help me with the administration of the Spanish translation group? Thanks so much!
Done!
Fantastic!