[ES] Spanish translation week 12, section 3 #555

Merged (11 commits into Atcold:master, Sep 19, 2020)
Conversation

@zxul767 (Contributor) commented Sep 16, 2020

While running locally to test the changes, I found and fixed a few broken things (e.g., the French translation for week 6 had an error that prevented the build from completing at all).

I also added some scaffolding to be completed by others (myself included) in future PRs (I think it's best to avoid too much WIP.)

@@ -0,0 +1,7 @@
---
Contributor Author

This change wasn't strictly necessary for this PR, but I added it so that the numbers matched in the local preview.

Owner

This is going to create merge conflicts. Please remove this boilerplate.

Contributor Author

Done

@@ -492,7 +492,7 @@ L'idée d'un réseau mémoire est qu'il y a deux parties importantes dans votre
Pour un réseau mémoire, il y a une entrée au réseau, $x$ (pensez à cela comme une adresse de la mémoire), et comparez ce $x$ avec les vecteurs $k_1, k_2, k_3, \cdots$ ("clés") à travers un produit scalaire. En les faisant passer par une softmax, on obtient un tableau de nombres dont la somme est égale à un. Et il y a un ensemble d'autres vecteurs $v_1, v_2, v_3, \cdots$ ("valeurs"). Multipliez ces vecteurs par les scalaires provenant du softmax et additionnez ces vecteurs vous donne le résultat (notez la ressemblance avec le mécanisme d'attention).

<center>
<img src="{{site.baseurl/images/week06/06-2/MemoryNetwork1.png" height="300px"/><br>
<img src="{{site.baseurl}}/images/week06/06-2/MemoryNetwork1.png" height="300px"/><br>
Contributor Author

This was the error preventing the build from completing locally.

Owner

Yeah, we've fixed it already.
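
As an editorial aside on the memory-network readout that the quoted French paragraph describes (dot products of the query $x$ with the keys, a softmax producing coefficients that sum to one, and a weighted sum of the values), a minimal PyTorch sketch of that step might look as follows; the shapes and variable names are illustrative assumptions, not code from the course repository.

```python
import torch

# Memory-network addressing step, assuming n memory slots of dimension d.
d, n = 64, 10
x = torch.randn(d)                    # query ("address" into the memory)
K = torch.randn(n, d)                 # keys k_1 ... k_n
V = torch.randn(n, d)                 # values v_1 ... v_n

scores = K @ x                        # dot product of the query with every key
alpha = torch.softmax(scores, dim=0)  # n coefficients that sum to one
output = alpha @ V                    # weighted sum of the values (attention-like readout)
```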

## [Atención](https://www.youtube.com/watch?v=f01J0Dri-6k&t=69s)

<!-- We introduce the concept of attention before talking about the Transformer architecture. There are two main types of attention: self attention *vs.* cross attention, within those categories, we can have hard *vs.* soft attention. -->
Presentamos el concepto de "atención" antes de hablar sobre la arquitectura del Transformador. Existen dos tipos principales de atención: auto atención y atención cruzada. Dentro de esas categorías, distinguimos entre atención "ténue" y atención "intensa".
Contributor Author

I used the terms "atención ténue" and "atención intensa" for "soft attention" and "hard attention", respectively, but I still have doubts about whether we should just use the English terms. Thoughts?

Owner

In this context, soft means "smooth" and hard means "one-hot". I think "suave" and "duro" are better choices. What do you think?

Contributor Author

Makes sense. Will push a change for this in a bit.

#### Atención Intensa

<!-- With hard-attention, we impose the following constraint on the alphas: $\Vert\vect{a}\Vert_0 = 1$. This means $\vect{a}$ is a one-hot vector. Therefore, all but one of the coefficients in the linear combination of the inputs equals zero, and the hidden representation reduces to the input $\boldsymbol{x}_i$ corresponding to the element $\alpha_i=1$. -->
En la atención intensa, imponemos las siguientes restricciones en las alfas: $\Vert\vect{a}\Vert_0 = 1$. Esto significa que $\vect{a}$ es un vector con codificación "one-hot". Por lo tanto, todos los coeficientes (con excepción de uno) son iguales a cero en la combinación lineal de las entradas, y la representación interna se reduce a la entrada $\boldsymbol{x}_i$ que corresponde al elemento $\alpha_i=1$.
Contributor Author

Is there a better translation for "one-hot encoding" in Spanish?

Owner

"uno-caliente"?

Contributor

I'd personally leave it as "one-hot". I remember seeing it that way when I was taking CS back in college.

Contributor Author

I agree with @xcastilla, the term "uno-caliente" sounds very strange in Spanish.

Owner

Fine with me. I'm not a native speaker.
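
As a side note on the hard-attention constraint in the quoted paragraph ($\Vert\vect{a}\Vert_0 = 1$, so $\vect{a}$ is one-hot and the hidden representation collapses to a single input), here is a tiny numerical sketch; the values are made up purely for illustration.

```python
import torch

# Hard attention: a one-hot coefficient vector selects exactly one input,
# so the linear combination reduces to that single x_i.
X = torch.tensor([[1., 0.],     # x_1
                  [2., 3.],     # x_2
                  [5., 1.]])    # x_3  (made-up values)
a = torch.tensor([0., 1., 0.])  # ||a||_0 = 1, i.e. one-hot with alpha_2 = 1
hidden = a @ X                  # equals x_2 exactly
print(hidden)                   # tensor([2., 3.])
```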



<!-- ## [Key-value store](https://www.youtube.com/watch?v=f01J0Dri-6k&t=1056s) -->
## [Almacén de clave/valor](https://www.youtube.com/watch?v=f01J0Dri-6k&t=1056s)
Contributor Author

I'm not convinced about the translation for "key-value store" that I used here. Suggestions?

Owner

I think you may want to discuss this on Slack. I'm not a native speaker.

Contributor

I've never heard anyone say it in Spanish, but "Almacén de clave/valor" or "Base de datos clave/valor" seems right.
For reference:
https://aws.amazon.com/es/nosql/key-value/

@zxul767 zxul767 changed the title Adds Spanish translation for week 12, section 3 [ES] Adds Spanish translation for week 12, section 3 Sep 16, 2020
@zxul767 zxul767 changed the title [ES] Adds Spanish translation for week 12, section 3 [ES] Adds Spanish translation week 12, section 3 Sep 16, 2020
@zxul767 zxul767 changed the title [ES] Adds Spanish translation week 12, section 3 [ES] Spanish translation week 12, section 3 Sep 16, 2020
docs/_config.yml Outdated
Comment on lines 283 to 287
- path: es/week09/09.md
sections:
- path: es/week09/09-1.md
- path: es/week09/09-2.md
- path: es/week09/09-3.md
Owner

We don't have the translation of week 9. Why did you add it?

Contributor Author

I was just thinking of adding scaffolding, but you're right, it's likely to cause merge conflicts, so I removed it.

@@ -0,0 +1,7 @@
---
Owner

This is going to create merge conflicts. Please remove this boilerplate.

docs/es/week12/12-3.md Outdated (resolved)
## [El Transformador](https://www.youtube.com/watch?v=f01J0Dri-6k&t=2114s)

<!-- Expanding on our knowledge of attention in particular, we now interpret the fundamental building blocks of the transformer. In particular, we will take a forward pass through a basic transformer, and see how attention is used in the standard encoder-decoder paradigm and compares to the sequential architectures of RNNs. -->
Con el fin de expandir nuestro conocimiento sobre atención, interpretaremos ahora los bloques fundamentales del transformador. En particular, haremos un recorrido de principio a fin de un transformador básico, y veremos cómo se usa la atención en el paradigma estándar del codificador-decodificador, y cómo se compara esto con las arquitecturas secuenciales de las RNRs (Redes Neuronales Recurrentes).
Contributor

I have a question here. I've seen in the wiki for other languages that acronyms shouldn't be changed:
e.g. https://github.com/Atcold/pytorch-Deep-Learning/wiki/Italian-translation#rules
Are we following this rule in the Spanish translation as well?

Contributor Author

I wasn't aware of that rule. Thanks for bringing it up. I think it makes sense to use it for consistency with other translations.
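
Since the quoted passage talks about taking a forward pass through a basic transformer in the encoder-decoder paradigm, here is a minimal sketch using PyTorch's stock nn.Transformer module; the dimensions are arbitrary assumptions and this is not code from the course notebooks.

```python
import torch
import torch.nn as nn

# Toy encoder-decoder forward pass; all sizes are arbitrary illustrative choices.
d_model, src_len, tgt_len, batch = 32, 7, 5, 2
model = nn.Transformer(d_model=d_model, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.randn(src_len, batch, d_model)  # encoder input sequence
tgt = torch.randn(tgt_len, batch, d_model)  # decoder input sequence
out = model(src, tgt)                       # shape: (tgt_len, batch, d_model)
print(out.shape)
```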

<center>
<img src="{{site.baseurl}}/images/week12/12-3/figure1.png" style="zoom: 60%; background-color:#DCDCDC;" /><br>
<!-- <b>Figure 1:</b> Two example diagrams of an autoencoder. The model on the left shows how an autoencoder can be designed with two affine transformations + activations, where the image on the right replaces this single "layer" with an arbitrary module of operations. -->
<b>Figura 1:</b> Dos diagramas ejemplificando un auto-codificador. El modelo de la izquierda muestra cómo un auto-codificador se puede diseñar con dos transformaciones afines + activaciones, mientras que el modelo de la derecha reemplaza esta "capa" única con un módulo arbitrario de operaciones.
Contributor

Suggested change
<b>Figura 1:</b> Dos diagramas ejemplificando un auto-codificador. El modelo de la izquierda muestra cómo un auto-codificador se puede diseñar con dos transformaciones afines + activaciones, mientras que el modelo de la derecha reemplaza esta "capa" única con un módulo arbitrario de operaciones.
<b>Figura 1:</b> Dos diagramas ejemplificando un autoencoder. El modelo de la izquierda muestra cómo un auto-codificador se puede diseñar con dos transformaciones afines + activaciones, mientras que el modelo de la derecha reemplaza esta "capa" única con un módulo arbitrario de operaciones.

I saw in the translation for lesson 7-3 that autoencoder was not translated to Spanish.
We should either review lesson 7 or use the English name here to keep the notation consistent. What do you think?

Contributor Author

In this case, I feel like "auto-codificador" is a good choice (unless we agreed to never translate names of technical terms like that). Maybe a good discussion to have in Slack.

Contributor

Sounds good to me!
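
On the autoencoder described in the quoted caption (two affine transformations plus activations, or an arbitrary module in their place), a minimal PyTorch sketch could look like this; the layer sizes are arbitrary assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

# Autoencoder as in the caption: one affine map + activation to encode,
# another affine map + activation to decode. Sizes (784 -> 32 -> 784) are made up.
autoencoder = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),     # encoder
    nn.Linear(32, 784), nn.Sigmoid(),  # decoder
)

x = torch.rand(16, 784)    # a batch of flattened inputs
x_hat = autoencoder(x)     # reconstruction with the same shape as x
```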

docs/es/week12/12-3.md Outdated (resolved)
docs/es/week12/12-3.md Outdated (resolved)
zxul767 and others added 8 commits September 17, 2020 09:47
paradigman => paradigma

Co-authored-by: Joaquim Castilla <xcastilla89@gmail.com>
aAgregar => agregar

Co-authored-by: Joaquim Castilla <xcastilla89@gmail.com>
Removes grammatical error in English version

Co-authored-by: Joaquim Castilla <xcastilla89@gmail.com>
@Atcold (Owner) left a comment

Please don't change the website generation version.
You're going to break things this way.
Please revert this edit.


@Atcold (Owner) commented Sep 17, 2020

@zxul767 and @xcastilla, can you two help me with the administration of the Spanish translation group?
You seem the most technically skilled, and I'm going a little crazy with the overhead of handling 300 people alone.

Thanks so much!

This reverts commit 638934b as requested by @Atcold
@zxul767 (Contributor Author) commented Sep 18, 2020

> Please don't change the website generation version.
> You're going to break things this way.
> Please revert this edit.

Done!

@Atcold (Owner) left a comment

Fantastic!

@Atcold Atcold merged commit 629a4af into Atcold:master Sep 19, 2020