
[pt] cs-230-convolutional-neural-networks #128

Merged
merged 6 commits
Feb 20, 2019

Conversation

leportella
Contributor

No description provided.

@shervinea added the in progress (Work in progress) label Feb 12, 2019
@shervinea changed the title from [WIP] [pt] Translation of CNNs to [pt] Convolutional neural networks Feb 12, 2019
@leportella
Contributor Author

@gabriel19913 could you check this out for me, please?

@shervinea added the reviewer wanted (Looking for a reviewer) label and removed the in progress (Work in progress) label Feb 15, 2019

**9. [Face verification/recognition, One shot learning, Siamese network, Triplet loss]**

⟶ [Verificação / reconhecimento facial, Aprendizado de um tiro, Rede siamesa, Perda tripla]
Contributor

I don't think "Aprendizado de um tiro" is a good translation for one-shot learning. What do you think about "Aprendizado one shot" or "Aprendizado de disparo único"?

Contributor Author
@leportella Feb 16, 2019

I changed it to "Aprendizado de disparo único".


**11. [Computational trick architectures, Generative Adversarial Net, ResNet, Inception Network]**

⟶ [Arquiteturas de truques computacionais, Rede Adversarial Generativa, ResNet, Rede de Iniciação]
Contributor

English is sometimes tricky for Portuguese speakers hahaha. Which is best: "Arquiteturas de truques computacionais" or "Truques computacionais de arquiteturas", or do they mean the same thing?

Contributor Author

I thought the first one ("Arquiteturas de truques computacionais") was the correct translation... but I guess there is room for both meanings.


**13. Architecture of a traditional CNN ― Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers:**

⟶ Arquitetura de uma RNC (CNN) - Redes neurais convolucionais, também conhecidas como CNN (em inglês), são tipos específicos de redes neurais que geralmente são compostas pelas seguintes camadas:
Contributor

"Arquitetura de uma tradicional RNC"


**14. The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections.**

⟶ A camada convolucional e a camadas de pooling podem ter um ajuste fino considerando os hiperparâmetros que estão descritos na próxima seção.
Contributor

"nas próximas seções."


**16. Convolution layer (CONV) ― The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input I with respect to its dimensions. Its hyperparameters include the filter size F and stride S. The resulting output O is called feature map or activation map.**

⟶ Camada convolucional (CONV) - A camada convolucional (CONV) usa filtros que realizam operações de convolução conforme eles escabeuan a entrada I com relação a suas dimensões. Seus hiperparâmetros incluem o tamanho do filtro F e o passo S. O resultado O é chamado de mapa de recursos (feature map) ou mapa de ativação.
Contributor

"eles escaneiam a entrada"


**59. Intersection over Union ― Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box Bp is over the actual bounding box Ba. It is defined as:**

⟶ Interseção sobre União (Intersection over Union) - Interseção sobre União, também conhecida como IoU, é uma funçãi que quantifica quão corretamente posicionado uma caixa de delimitação predita Bp está sobre a caixa de delimitação real Ba. É definida por:
Contributor

é uma função que
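Since entry 59 defines IoU as the ratio between the overlap of Bp and Ba and their union, here is a minimal sketch; the corner-coordinate box format (x1, y1, x2, y2) is an assumption:

```python
def iou(box_p, box_a):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_p[0], box_a[0]), max(box_p[1], box_a[1])
    x2, y2 = min(box_p[2], box_a[2]), min(box_p[3], box_a[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / (area_p + area_a - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.14
```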


**61. Anchor boxes ― Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.**

⟶ Caixas de ancoragem (Anchor boxes) - Caixas de ancoragem é uma técnica usada para predizer caixas de delimitação que se sobrepões. Na prática, a rede tem permissão para predizer mais de uma caixa simultaneamente, oonde cada caixa prevista é restrita a ter um dado conjunto de propriedades geométricas. Por exemplo, a primeira predição pode ser potencialmente uma caixa retangular de uma determinada forma, enquanto a segunda pode ser outra caixa retangular de uma forma geométrica diferente.
Contributor

delimitação que se sobrepõem.
onde cada caixa
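To make entry 61 concrete: each of the k simultaneous predictions is tied to a fixed box shape. The aspect ratios below are made up for illustration only:

```python
# Hypothetical anchor shapes as (height, width) ratios relative to a grid cell.
# Each of the k box predictions in a cell is constrained to one of these shapes.
anchors = [(1.0, 1.0),   # square box
           (2.0, 1.0),   # tall rectangle, e.g. a pedestrian
           (1.0, 2.0)]   # wide rectangle, e.g. a car
k = len(anchors)
```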


**62. Non-max suppression ― The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes having a probability prediction lower than 0.6, the following steps are repeated while there are boxes remaining:**

⟶ Supressão não máxima (Non-max suppression) - A técnica supressão não máxima visa remover caixas de delimitação de um mesmo objeto que estão duplicadas e se sobrepõe, selecionando as mais representativas. Depois de ter removido todas as caixas que contém uma predição menor que 0.6. os seguintes passos são repetidos enquanto existem caixas remanescentes:
Contributor

e se sobrepõem
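A greedy sketch of the procedure entry 62 describes, reusing the iou helper from the sketch above; the 0.6 score cut-off comes from the entry, while the 0.5 IoU threshold for discarding duplicates is an assumption:

```python
def non_max_suppression(boxes, scores, score_thresh=0.6, iou_thresh=0.5):
    """Drop low-confidence boxes, then repeatedly keep the highest-scoring
    box and discard remaining boxes that overlap it too much."""
    candidates = sorted(
        ((s, b) for s, b in zip(scores, boxes) if s >= score_thresh),
        key=lambda sb: sb[0], reverse=True)
    kept = []
    while candidates:
        score, best = candidates.pop(0)
        kept.append((score, best))
        candidates = [(s, b) for s, b in candidates if iou(b, best) < iou_thresh]
    return kept
```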


**66. [Step 1: Divide the input image into a G×G grid., Step 2: For each grid cell, run a CNN that predicts y of the following form:, repeated k times]**

⟶ [Passo 1: Divide a imagem de input em uma grade G×G., Passo 2: Para cada célula da grade, rode uma CNN que prevê o valor y da seguinte forma:, repita k vezes]
Contributor

imagem de entrada.
roda uma CNN


**67. where pc is the probability of detecting an object, bx,by,bh,bw are the properties of the detected bouding box, c1,...,cp is a one-hot representation of which of the p classes were detected, and k is the number of anchor boxes.**

⟶ onde pc é a probabilidade de detecção do objeto, bx,by,bh,bw são as proprioedades das caixas delimitadoras detectadas, c1,...,cp é uma representação única (one-hot representation) de quais das classes p foram detectadas, e k é o número de caixas de ancoragem.
Contributor

são as propriedades
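Putting entries 66 and 67 together, the network outputs one [pc, bx, by, bh, bw, c1, ..., cp] block per anchor box per grid cell; the grid size, number of anchors and number of classes below are arbitrary example values:

```python
G, k, p = 3, 2, 20          # 3x3 grid, 2 anchor boxes, 20 classes (example values)
y_shape = (G, G, k, 5 + p)  # 5 = [pc, bx, by, bh, bw], plus the one-hot class scores
print(y_shape)              # (3, 3, 2, 25)
```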

@gabriel19913
Contributor

> @gabriel19913 could you check this out for me, please?

I just did it. :)

@leportella
Contributor Author

@gabriel19913 I fixed the issues from the review. @shervinea I think it is good to go, but it can wait for Gabriel's feedback.

I also added Gabriel to the document as the reviewer.

Contributor
@gabriel19913 left a comment

Hello @leportella, I forgot to review the end of the file, my bad.
After that is addressed, I think we are good to go.




**73. Remark: although the original algorithm is computationally expensive and slow, newer architectures enabled the algorithm to run faster, such as Fast R-CNN and Faster R-CNN.**

⟶ Observação: embora o algoritmo original seja computacionalmente caro e lento, arquiteturas mais recentes, como o Fast R-CNN e o Faster R-CNN, permitiram que o algoritmo fosse executado mais rapidamente.
Contributor

Don't you think "seja" is better than "fosse"?

Contributor Author
@leportella Feb 19, 2019

I think 'seja' is better because even though we have newer, faster algorithms, the old one is still slow and expensive, right? If it had said "was", then "fosse" would indeed be better.




**78. One Shot Learning ― One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1,image 2).**

⟶ Aprendizado de Tiro Único (One Shot Learning) - One Shot Learning é um algoritmo de verificação facial que utiliza um conjunto de treinamento limitado para aprender uma função de similaridade que quantifica o quão diferentes são as duas imagens. A função de similaridade aplicada a duas imagens é frequentemente denotada como d(imagem 1, imagem 2).
Contributor

Disparo único

Contributor Author

Already changed that
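A minimal sketch of how the similarity function d from entry 78 is used for verification; the threshold value and the L2-distance stand-in for the learned d are assumptions:

```python
import numpy as np

def verify(d, image1, image2, tau=0.7):
    """Declare the two face images a match when d(image 1, image 2) < tau."""
    return d(image1, image2) < tau

# Toy stand-in for the learned similarity: L2 distance between embedding vectors.
d = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
print(verify(d, [0.1, 0.2], [0.15, 0.25]))  # True: the embeddings are close
```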




**80. Triplet loss ― The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example to another one. By calling α∈R+ the margin parameter, this loss is defined as follows:**

⟶ Perda tripla (Triplet loss) - A perda tripla ℓ é uma função de perda (loss function) computada na representação da encorporação de três imagens A (âncora), P (positiva) e N (negativa). O exemplo da âncora e positivo pertencem à mesma classe, enquanto o exemplo negativo pertence a uma classe diferente. Chamando o parâmetro de margem de α∈R+, essa função de perda é calculada da seguinte forma:
Contributor

definida da seguinte forma
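A small sketch of the triplet loss from entry 80 in its standard form ℓ = max(d(A,P) − d(A,N) + α, 0), computed on embedding vectors; using the squared Euclidean distance for d and α = 0.2 are assumptions, since the entry only requires α∈R+:

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """max(d(A,P) - d(A,N) + alpha, 0) with d = squared Euclidean distance."""
    d_ap = np.sum((f_a - f_p) ** 2)   # anchor vs. positive (same class)
    d_an = np.sum((f_a - f_n) ** 2)   # anchor vs. negative (other class)
    return max(d_ap - d_an + alpha, 0.0)

anchor   = np.array([0.0, 1.0])
positive = np.array([0.1, 0.9])   # same class as the anchor
negative = np.array([1.0, 0.0])   # different class
print(triplet_loss(anchor, positive, negative))  # 0.0: the margin is already satisfied
```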




**92. Generative Adversarial Network ― Generative adversarial networks, also known as GANs, are composed of a generative and a discriminative model, where the generative model aims at generating the most truthful output that will be fed into the discriminative which aims at differentiating the generated and true image.**

⟶ Rede Adversarial Gerativa (Generative Adversarial Network) - As Generaive Adversarial Networks, também conhecidas como GANs, são compostas de um modelo generativo e um modelo discriminativo, onde o modelo generativo visa gerar a saída mais verdadeira que será alimentada na discriminativa que visa diferenciar a imagem gerada e verdadeira.
Contributor

a imagem gerada e a imagem verdadeira
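A toy skeleton of the generator/discriminator pair described in entry 92, on 1-D data instead of images; PyTorch, the layer sizes and the BCE objectives are assumptions for illustration:

```python
import torch
import torch.nn as nn

# G maps noise to a fake sample; D scores how likely a sample is to be real.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
bce = nn.BCELoss()

z = torch.randn(8, 4)             # noise fed to the generator
real = torch.randn(8, 1) + 3.0    # toy "true" data
fake = G(z)

# The discriminator tries to label real samples 1 and generated samples 0...
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
# ...while the generator tries to make the discriminator label its output as real.
g_loss = bce(D(fake), torch.ones(8, 1))
print(float(d_loss), float(g_loss))
```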

@leportella
Contributor Author

@gabriel19913 now it's done, I think :)

@gabriel19913
Contributor

Good job @leportella! I think we are good to go, @shervinea.

@shervinea
Owner

Thank you both @leportella @gabriel19913 for your awesome work!

@shervinea merged commit 22a533a into shervinea:master Feb 20, 2019
@shervinea changed the title from [pt] Convolutional neural networks to [pt] cs-230-convolutional-neural-networks Oct 6, 2020