
[zh] cs-230-recurrent-neural-networks #181

Open

HsinJhao wants to merge 5 commits into master

Conversation


@HsinJhao commented on Oct 6, 2019

The cs-230 recurrent neural networks translation is finished.

Change the order between translated text and <br>
@shervinea (Owner)

Thank you for all your work @HsinJhao! Please feel free to invite anyone you know to review the translation.

@shervinea added the "reviewer wanted" (Looking for a reviewer) label on Oct 6, 2019
**17. [Drawbacks, Computation being slow, Difficulty of accessing information from a long time ago, Cannot consider any future input for the current state]**

&#10230;
[缺点, 计算缓慢, 难以访问长时间的历史信息, 难以考虑未来时间步的输入对当前状态的影响]


无法考虑未来时间步的输入对当前状态的影响


&#10230;

<br>误差分析 - 当所预测得到的翻译ˆy很差时,有人会想,为什么我们没有通过执行以下错误分析得到一个好的翻译y:


误差分析―当获得较差的预测翻译y时,可以通过执行以下错误分析来思考为什么我们没有得到好的翻译y *:
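For context (this is what the quoted sentence refers to in the original cheatsheet, not a change proposed in this PR): the error analysis compares the conditional probability of the reference translation y* with that of the predicted translation ŷ, roughly:

```latex
P(y^* \mid x) > P(\hat{y} \mid x) \;\Rightarrow\; \text{beam search is at fault}, \qquad
\text{otherwise} \;\Rightarrow\; \text{the RNN model is at fault}
```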

**14. where Wax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2 activation functions.**

&#10230;
其中Wax,Waa,Wya,ba是相关的系数矩阵, 在时间尺度上被整个网络共享;g1,g2是相关的激活函数。


其中Wax,Waa,Wya,ba,by是在时间尺度上被整个网络共享的系数矩阵;g1,g2是激活函数。

Author


Applied!
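For reference, the sentence in item 14 describes the standard RNN recurrence from the cheatsheet, which (roughly, in the cheatsheet's notation) reads:

```latex
a^{<t>} = g_1\big(W_{aa}\, a^{<t-1>} + W_{ax}\, x^{<t>} + b_a\big), \qquad
y^{<t>} = g_2\big(W_{ya}\, a^{<t>} + b_y\big)
```

which is why all five of Wax, Waa, Wya, ba and by should appear in the translation.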

**16. [Advantages, Possibility of processing input of any length, Model size not increasing with size of input, Computation takes into account historical information, Weights are shared across time]**

&#10230;
[优点, 可处理任何长度的输入, 模型大小不会随输入大小增加, 计算考虑历史信息, 权重在时间尺度上被整个网络共享]


可以处理任何长度的输入,模型大小不会随输入大小的增加而增加,计算时会考虑历史信息,权重在整个时间尺度上共享

Author


Applied, thanks!

**28. Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.**

&#10230;
梯度裁剪 - 该方法是用于解决进行反向传播时时而出现梯度爆炸问题的技术。通过限制梯度的最大值, 这种现象在实际中得到了相应的控制。


梯度裁剪 - 一种用于解决进行反向传播时时而出现梯度爆炸问题的方法。

Author


To be consistent with the Chinese translation of the book 《Deep Learning》 (Ian Goodfellow, et al.), "Gradient clipping" will be translated as 梯度截断 rather than 梯度裁剪.

为与Ian Goodfellow等人编写的《deep learning》中文译本保持一致,将Gradient clipping翻译为梯度截断而不是梯度裁剪
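As a side note for reviewers (illustrative only, not part of the translated file): gradient clipping simply caps each gradient component at a chosen maximum magnitude before the parameter update. A minimal NumPy sketch, with the threshold `max_value` picked arbitrarily:

```python
import numpy as np

def clip_gradient(grad, max_value=5.0):
    """Cap every gradient component at +/- max_value (element-wise clipping)."""
    return np.clip(grad, -max_value, max_value)

# Example: a gradient with an exploding component gets capped before the update.
g = np.array([0.3, -120.0, 48.7])
print(clip_gradient(g))  # [ 0.3 -5.   5. ]
```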


&#10230;

<br>嵌入矩阵 - 对于给定的词汇w, 将该词汇的one-hot表示ow映射至词嵌入表示ew的嵌入矩阵E满足下式:


??

Author


Without changing the original meaning, it will be translated as:
'' 嵌入矩阵 - 对于给定的词汇w, 通过嵌入矩阵E可将该词汇的one-hot表示向量ow映射为词嵌入表示向量ew, E满足下式:''
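To make the relation concrete for reviewers (illustrative only, with toy dimensions chosen arbitrarily): multiplying the embedding matrix E by the one-hot vector o_w is equivalent to selecting the column of E that corresponds to the word w, which yields the embedding e_w.

```python
import numpy as np

vocab_size, embed_dim = 6, 3                  # toy sizes, chosen arbitrarily
E = np.random.randn(embed_dim, vocab_size)    # embedding matrix E

w = 4                                         # index of the word in the vocabulary
o_w = np.zeros(vocab_size)
o_w[w] = 1.0                                  # one-hot representation o_w
e_w = E @ o_w                                 # word embedding e_w = E o_w

assert np.allclose(e_w, E[:, w])              # same as looking up column w of E
```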


&#10230;

<br>bleu分数 ― 双语评估替补(bilingual evaluation understudy, bleu)分数通过基于n-gram精度计算相似度分数来量化机器翻译的好坏。其定义如下:


双语评估替补???
通过基于n-gram精度计算相似性分数来量化机器翻译的质量。

Author


After searching the related Chinese references, 'bilingual evaluation understudy score' will be translated as 双语评估替换分数.

通过查阅相关中文资料,将**'bilingual evaluation understudy score'翻译为双语评估替换分数**为宜.
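For reviewers who want a concrete picture of the "n-gram precision" this item mentions (illustrative only, and deliberately simplified; the full bleu score also combines several values of n and a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also occur in the reference (with clipped counts)."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    matches = sum(min(count, ref[gram]) for gram, count in Counter(cand).items())
    return matches / len(cand)

# Toy example (tokens chosen arbitrarily).
cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(ngram_precision(cand, ref, 1))  # 0.833... (5 of 6 unigrams match)
print(ngram_precision(cand, ref, 2))  # 0.6     (3 of 5 bigrams match)
```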

Applied some suggestions from SpeakingTom.
Reviewed some words:
CBOW --> 连续词袋
LSTM --> 长短时记忆
Gradient clipping --> (梯度裁剪->梯度截断)...
@shervinea (Owner)

Thank you @HsinJhao and @SpeakingTom for all your work!

@HsinJhao: would it be possible to only keep the zh/cs-230-recurrent-neural-networks.md file in this PR? That way, we could immediately move forward with the merge.

@shervinea changed the title from "[zh] Recurrent Neural Networks" to "[zh] cs-230-recurrent-neural-networks" on Oct 6, 2020
Labels: reviewer wanted (Looking for a reviewer)

3 participants