
Add model configuration for machine translation with external memory. #36

Merged
merged 9 commits into from
Sep 13, 2017

Conversation


@xinghai-sun xinghai-sun commented May 9, 2017

resolve #5

The model is implemented mainly according to the paper Memory-enhanced Decoder for Neural Machine Translation, with a few minor differences (to be listed in the README later). It also differs slightly from this V1 configuration.

Besides, to avoid running into this potential bug (Issue), I put the write ahead of the read (upon the external memory) within one recurrent step, differing from the original paper. Such a change appears to make no difference to the final model structure (it is equivalent), and it successfully bypasses the bug.
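The write-before-read reordering can be sketched in plain numpy. This is an illustrative sketch only: the names (`memory_step`, `W_write`, `W_read`) are hypothetical, and plain dot-product softmax addressing stands in for the model's full addressing scheme.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_step(M, state, W_write, W_read):
    """One decoder step over an external memory M of shape (slots, slot_size).

    Illustrative only: the write is applied before the read, mirroring the
    reordering described above; the real model adds interpolation and a
    learned controller.
    """
    # write first: address with the current state, then softly blend it in
    w_addr = softmax(M @ (W_write @ state))           # (slots,)
    M = M + np.outer(w_addr, state - w_addr @ M)      # soft overwrite
    # then read from the already-updated memory
    r_addr = softmax(M @ (W_read @ state))
    read_vec = r_addr @ M                             # (slot_size,)
    return M, read_vec

rng = np.random.default_rng(0)
slots, size = 8, 4
M = rng.normal(size=(slots, size))
state = rng.normal(size=size)
W_w = rng.normal(size=(size, size))
W_r = rng.normal(size=(size, size))
M2, r = memory_step(M, state, W_w, W_r)
```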

@xinghai-sun xinghai-sun requested a review from lcy-seso May 9, 2017 14:38
@lcy-seso lcy-seso requested a review from luotao1 May 10, 2017 09:30
@lcy-seso lcy-seso self-assigned this May 10, 2017
@lcy-seso (Collaborator) left a comment:

Good job; looking forward to the documentation.

# See the License for the specific language governing permissions and
# limitations under the License.
"""
This python script is a example model configuration for neural machine
Collaborator:

a example --> an example.

Contributor Author:

Done.

Both types of external memories are exploited to enhance the vanilla
Seq2Seq neural machine translation.

The implementation largely followers the paper
Collaborator:

largely followers --> primarily follows

Contributor Author:

Done.

hidden_size = 1024
batch_size = 5
memory_slot_num = 8
beam_size = 40
Collaborator:

The default value of beam_size is 40, which is too large. If a user does not know how this parameter works and does not use a GPU for generation, the example will run very slowly and require much more memory. Besides, a large beam size (in my experience, anything over 15) usually degrades generation quality (e.g. as measured by BLEU).

I suggest setting beam_size to 5 or less.

Contributor Author:

I agree; it was just a mistake. It was batch_size that I intended to set to 40 for experiments, not beam_size.
Done.

"""
Read head for external memory.

:param write_key: Key vector for read head to generate addressing
Collaborator:

read head --> the read head

Contributor Author:

Done. Replaced with "Read from the external memory.".

# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Collaborator:

After discussion, we decided not to include the license in Python scripts.

Contributor Author:

Removed.

return addressing_weight
# interpolation with previous addresing weight
return self.__interpolation__(key_vector, addressing_weight)

Collaborator:

I think line 158 is a bug?

Contributor Author:

👍 Yes! Removed.

embeddings = paddle.layer.embedding(
input=input,
size=word_vec_dim,
param_attr=paddle.attr.ParamAttr(name='_encoder_word_embedding'))
Collaborator:

`param_attr=paddle.attr.ParamAttr(name='_encoder_word_embedding')` — the parameter name is not explicitly used elsewhere, so such hard-coding should be avoided.

Contributor Author:

Agreed. Done.

input=embeddings, size=size, reverse=False)
backward = paddle.networks.simple_gru(
input=embeddings, size=size, reverse=True)
merged = paddle.layer.concat(input=[forward, backward])
Collaborator:

merged_ + a noun seems a better name.

Contributor Author:

Done.

a span of sequence time, which is a successful design enriching the model
with capability to "remember" things in the long run. However, such a vector
state is somewhat limited to a very narrow memory bandwidth. External memory
introduced here could easily increase the memory capacity with linear
Collaborator:

capacity --> the capacity

Contributor Author:

Done. Did you mean "capability" on line 259?

- Unbounded memory for handling source language's token-wise information.
Exactly the attention mechanism over Seq2Seq.

Notice that we take the attention mechanism as a special form of external
Collaborator:

a special --> a particular

Contributor Author:

Done.

self.zero_addressing_init = paddle.layer.slope_intercept(
input=paddle.layer.fc(input=boot_layer, size=1),
slope=0.0,
intercept=0.0)
Contributor Author:

Is there a more elegant/efficient way to create such a constant layer (with its shape depending on boot_layer)? The current approach involves an unnecessary backward error propagation.

Collaborator:

A data layer does not do back-propagation.

Contributor Author:

Yes, but that brings another kind of inconvenience: I would have to prepare and pass in a different dummy data layer for each ExternalMemory instance (with different sizes). Here, I want to create it entirely within ExternalMemory, with no extra work outside it.

@luotao1 (Contributor) left a comment:

  1. The text uses a lot of parentheses. Please express things in normal sentences and keep only the few parentheses that are truly necessary (those giving the English terms). Even the English glosses are currently overused; some need no gloss at all.
  2. This article is best read after the machine translation chapter of the book; please say so at the beginning.
  3. This material is not easy to write and is rather abstract, but I still suggest making it plainer and more accessible. The current text is theory-heavy and gives users no intuition, i.e. it is unclear what benefit adding this external memory module actually brings. The way this blog post contrasts the attention mechanism with the Neural Turing Machine is quite vivid; please have another look at it.
  4. There are many technical terms; consider hiding some or moving them to the final discussion section, where interested readers can read that part or the references in detail.

@@ -1 +1,187 @@
TBD
# Neural Machine Translation with External Memory
Contributor:

Please use Chinese for all headings throughout; same below.

Contributor Author:

Done.


A neural machine translation (NMT) model with an **external memory** module is an important extension of neural machine translation models. It uses a differentiable external memory module, whose read/write controllers are implemented as neural networks, to extend the bandwidth or capacity of the working memory inside the translation model. Acting as an efficient "external knowledge base", it assists the temporary storage and retrieval of information in translation and similar tasks, effectively improving model quality.

Beyond translation, the model can be applied broadly to other natural language processing and generation tasks that require "large-capacity dynamic memory", for example machine reading comprehension / question answering, multi-turn dialog, and other long-text generation tasks. At the same time, memory, as one of the important parts of cognition, can be used to strengthen the performance of many other machine learning models. This example merely combines a neural machine translation model (Seq2Seq, sequence-to-sequence) with an external memory mechanism, as a starting point for further work, and illustrates PaddlePaddle's flexibility for building such models.
Contributor:

  1. "可被用于强化其他多种机器学习模型的表现" ("can be used to strengthen the performance of many other machine learning models"): this sentence does not read smoothly. Perhaps "can strengthen many other machine learning models"?
  2. "neural machine translation model (meaning only Seq2Seq, sequence-to-sequence)": is the parenthetical appropriate? Sequence-to-sequence is not limited to machine translation. The last sentence also does not read smoothly; please revise.

Contributor Author:

  1. Done.
  2. That sentence adds little, so I deleted it.


The external memory mechanism adopted in this article mainly refers to the **Neural Turing Machine**\[[1](#references)\]. It is worth noting that the Neural Turing Machine is only one of the attempts at modeling memory with neural networks. Memory mechanisms have long been widely studied, and in recent years, against the backdrop of deep neural networks, a series of interesting works has emerged, e.g. Memory Networks and Differentiable Neural Computers (DNC). Apart from the Neural Turing Machine, these are outside the scope of this article.

The implementation in this article mainly follows paper \[[2](#references)\], with slight differences, and is based on the PaddlePaddle V2 APIs. First-time users can refer to the PaddlePaddle [installation guide](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/getstarted/build_and_install/docker_install_cn.rst).
Contributor:

"and is based on the PaddlePaddle V2 APIs. First-time users can refer to the PaddlePaddle installation guide." can be removed.

Contributor Author:

Done.


Memory is one of the important components of human (and animal) cognition. It gives cognition coherence over time, making complex cognition (as opposed to perception) possible. Memory is likewise one of the key capabilities a machine learning model needs.

Arguably, every machine learning model natively has some memory capability, whether it is parametric or non-parametric, whether a traditional SVM (the support vectors are the memory) or a neural network (the network parameters are the memory). However, this "memory" is almost always **static memory**: once training ends, the memory is frozen; at prediction time, the model is statically consistent and has no additional memory across time steps.
Contributor:

Punctuation fixes:

  1. "Arguably, every machine learning model natively has some memory capability: whether parametric or non-parametric; whether a traditional SVM (support vectors as memory) or a neural network (parameters as memory)."
  2. "; at prediction time, the model is statically consistent and has no additional cross-time-step memory."

Contributor Author:

Done.



#### Dynamic memory #1 --- the hidden state vector in RNNs
Contributor:

Dynamic memory 1: the hidden state vector in RNNs

Please update the headings below accordingly.

Contributor Author:

Done.


Users need to tokenize the text themselves and build a dictionary to convert tokens to IDs.

PaddlePaddle's paddle.paddle.wmt14 interface ([wmt14.py](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/dataset/wmt14.py)) provides, by default, a preprocessed [smaller-scale subset of WMT-14](http://paddlepaddle.bj.bcebos.com/demo/wmt_shrinked_data/wmt14.tgz), along with the following two reader creator functions:
Contributor:

paddle.dataset.wmt14 provides, by default, a preprocessed, smaller-scale WMT-14 subset. Same below.

Contributor Author:

Done.


Enter at the command line:

```python mt_with_external_memory.py```
Contributor:

Python --> python

Contributor Author:

Done.


```python mt_with_external_memory.py```

to run the training script (one pass over the data by default); the trained model is periodically saved locally as `params.tar.gz`. After training, translations are generated for a small number of samples; see the `infer` function for details.
Contributor:

Is the infer function not going to be written up?

Contributor Author:

Readers can understand it from the code; there is no need to expand on it for now, is there?
As for inference results: there is currently no usable model (it depends on the fix for the Floating Exception Error bug); a reliable model and brief generation results will be added later.


The differences are as follows:

1. The content-based addressing formula differs: the original paper uses $a = v^T(WM^B + Us)$, while this implementation uses $a = v^T \textrm{tanh}(WM^B + Us)$, to stay consistent with the attention addressing in \[[3](#references)\].
Contributor:

  1. "The content-based addressing formula differs: XXX; in this example it is ..."
  2. "The bounded external memory is initialized differently: the original paper uses XXX; this example uses ..." (Why "for now" (暂为)? If it is only for now, does that mean it will change later?)

Contributor Author:

Done.
Removed "暂为" ("for now").


1. Alex Graves, Greg Wayne, Ivo Danihelka, [Neural Turing Machines](https://arxiv.org/abs/1410.5401). arXiv preprint arXiv:1410.5401, 2014.
2. Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu, [Memory-enhanced Decoder for Neural Machine Translation](https://arxiv.org/abs/1606.02003). arXiv preprint arXiv:1606.02003, 2016.
3. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473). arXiv preprint arXiv:1409.0473, 2014.
Contributor:

If a reference has a formally published version, please do not cite the arXiv version. All three here need to be revised.

Contributor Author:

The 2nd has been revised; the other two were published only on arXiv, and no other formal version was found.

@xinghai-sun (Contributor Author) left a comment:

Many thanks to @luotao1 for the detailed review; it was very thorough and I benefited a great deal. I have revised the whole article according to the suggestions, including:

  1. Most of the individual review points have been addressed.
  2. Went through the whole text again to improve readability, e.g. reducing unnecessary parentheses.
  3. On review summary 2: added a reference to the MT chapter of the book.
  4. On review summary 3: fully agreed. The article is a bit opaque for readers unfamiliar with the related work, and requires reading and understanding the papers beforehand; I agree it needs some revision for accessibility. The original intent was to avoid spending a lot of space on background knowledge readers can easily get from the papers or other blog posts, and instead to include the author's own understanding to inspire deeper thinking. Also, Models differs from Book: some content can be lighter, and some can go deeper. In addition, the suggested blog post focuses on Memory Networks, not this article's Neural Turing Machines, and the difference it draws between MN and the attention mechanism (sentence-level vs. token-level memory) is not the difference this article discusses.

Thanks again to @luotao1 for the careful review and suggestions!

## Experiments

To Be Added.
Contributor Author:

Removed. This currently depends on the fix for the Floating Exception Error bug, so there are no experimental results yet; they will be added later.
@xinghai-sun (Contributor Author):
Thanks to @lcy-seso for the good review. All review comments were taken in commit aefd266.

@lcy-seso (Collaborator) left a comment:

Good writing, but it is still hard for most readers to follow.

__get_adressing_weight__(self, head_name, key_vector)
write(self, write_key)
read(self, read_key)
```
Collaborator:

Lines 106–114 are an important block of content; consider reorganizing its structure a bit to help readers. As organized now, I (as a reader) still only get the gist without understanding how to actually use it.

For example, could it be:

Private methods

  1. __init__ (explain the inputs, outputs, and overall logic)
  2. __content_addressing__ (inputs, outputs, overall logic)
  3. __interpolation__ (inputs, outputs, overall logic)
  4. __get_addressing_weight__ (inputs, outputs, overall logic)

Public interface

  1. write (inputs, outputs, overall logic)
    • step 1. ×
    • step 2. ×
    • ……
  2. read (inputs, outputs, overall logic)
    • step 1. ×
    • step 2. ×
    • ……
  • The above matches my own reading experience: after reading a paper, I reorganize it into this kind of "templated" result.
  • Rearranging the existing text this way should make the logic clearer to readers.
  • One problem right now: the core questions of how to define, initialize, and read/write the external memory are glossed over in the description at lines 80–83, so readers must study the details again against the code. For the prose here, I suggest adding the steps of the read/write process.

Contributor Author:

Done.

Contributor Author:

Done.

The class structure is as follows:

```
class ExternalMemory(object):
Collaborator:

If the interface is listed, the input/output parameters and return values should be documented following Python docstring conventions.

Contributor Author:

Done.

read(self, read_key)
```

The Neural Turing Machine's "external memory matrix" is implemented with `paddle.layer.memory`; note that `is_seq` must be set to `True`. The length of the sequence gives the number of memory slots, and `size` gives the dimension of each memory slot (a vector). The memory also depends on an external layer for initialization, and the number of memory slots is determined by that layer's output sequence length. Hence the class can implement not only bounded memory but also unbounded memory (a variable number of memory slots).
Collaborator:

  • "note that is_seq must be set to True" --> as a reader I would want to know: when would it be set to False?
  • For this descriptive paragraph, consider pasting a code snippet and explaining against the code; otherwise, as a reader, it is hard to follow the prose, and one only gets a rough picture before having to spend time studying the code.
  • Pasting all of the code is indeed unnecessary, but for the key snippets the prose explains, showing the code directly would likely be clearer.

Contributor Author:

Done.


The addressing logic of the `ExternalMemory` class is implemented by the two private methods `__content_addressing__` and `__interpolation__`. Reads and writes are implemented by the `read` and `write` methods. Read addressing and write addressing are performed independently, unlike \[[2](#参考文献)\] where the two share the same addressing strength; the aim is to make the class more general.

For simplicity, the controller is not modularized separately; it is spread across the addressing and read/write functions, and a simple feed-forward network models the controller. Readers may try factoring out the controller logic into a module, or using a recurrent neural network as the controller.
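To make the read/write flow concrete, here is a minimal numpy mock of the interface described above. It is a sketch only, not the actual PaddlePaddle implementation: the class name `ExternalMemoryMock`, the fixed interpolation gate, and the use of the write key itself as the written content are all simplifying assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ExternalMemoryMock:
    """Numpy sketch of the ExternalMemory interface (illustrative only)."""

    def __init__(self, init_slots, readonly=False):
        self.mem = np.array(init_slots, dtype=float)   # (slots, slot_size)
        self.readonly = readonly
        uniform = np.full(len(self.mem), 1.0 / len(self.mem))
        # the read and write heads keep independent addressing weights
        self.prev_weight = {"read": uniform.copy(), "write": uniform.copy()}

    def __content_addressing__(self, key_vector):
        # cosine similarity between the key and every slot, softmax-normalized
        sims = self.mem @ key_vector / (
            np.linalg.norm(self.mem, axis=1) * np.linalg.norm(key_vector) + 1e-8)
        return softmax(sims)

    def __interpolation__(self, head_name, weight, gate=0.5):
        # blend with this head's previous addressing weight; in the real
        # model the gate comes from the controller, not a constant
        w = gate * weight + (1.0 - gate) * self.prev_weight[head_name]
        self.prev_weight[head_name] = w
        return w

    def read(self, read_key):
        w = self.__interpolation__("read", self.__content_addressing__(read_key))
        return w @ self.mem                            # weighted sum over slots

    def write(self, write_key):
        assert not self.readonly, "write() is disabled in read-only mode"
        w = self.__interpolation__("write", self.__content_addressing__(write_key))
        # soft overwrite: move the addressed slots toward the written content
        self.mem = (1.0 - w)[:, None] * self.mem + np.outer(w, write_key)
```

For example, `ExternalMemoryMock(np.eye(4)).read(np.array([1., 0., 0., 0.]))` returns a 4-dimensional vector whose largest component corresponds to the first slot.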
Collaborator:

  • Could you pick a code fragment to briefly point out and explain the controller? For example: "the controller is lines × to × of ×.py, ...".
  • In my experience, the read/write heads, memory, and controller in the NTM paper are abstract and unfamiliar to anyone meeting them for the first time; a question they often ask is: what exactly are the read/write heads, the memory, and the controller? If our example can make all of these concepts concrete, and readers then study further on their own, that would be a very good outcome for this example.
  • The implementation section will surely get special attention from readers, since it is the section that gets down to code. Yet after reading it, my feeling is: the author understands, but I (the reader) still don't, because the key points — how the external memory is implemented and the read/write flow — are not concrete enough. If I wanted to do this myself, I would still feel hazy about where to start.

Contributor Author:

Done.


Three main functions are involved:

```
Collaborator:

Lines 126–130 could be reorganized: three functions are listed here; ideally give the inputs, outputs, return values, and rough logic of each, following Python docstring conventions. Functions whose inputs, return values, and purpose are unclear will leave readers fairly lost.

Contributor Author:

Done.


However, such a vectorized $h$ or $c$ has limited information bandwidth. In sequence-to-sequence generation models, this bandwidth bottleneck shows up above all when information moves from the encoder to the decoder: relying on only one fixed-length state vector to encode an entire variable-length source sentence entails a degree of information loss.

Hence the attention mechanism \[[3](#参考文献)\] was proposed to overcome this difficulty. While decoding, the decoder no longer depends only on the single sentence-level encoding vector from the encoder, but on a group of vectors — one token-level encoding vector (state vector) per source token — and uses a set of learnable attention weights to allocate attention dynamically, extracting information by linear weighting for generating the symbol at each position of the output sequence (see the PaddlePaddle Book chapter on [machine translation](https://github.com/PaddlePaddle/book/tree/develop/08.machine_translation)). This distribution of attention weights can be viewed as content-based addressing (cf. the addressing described in the Neural Turing Machine \[[1](#参考文献)\]): each position of the source sentence receives a read strength determined by its content, acting as a kind of "soft alignment" with the source language.
Collaborator:

  • "于是,注意力机制(Attention Mechanism)[3] 被提出" --> "[3] proposed the attention mechanism (Attention Mechanism)"
  • Turn the passive voice into the active voice; it will read more comfortably.

Contributor Author:

Done.



This "group of vectors" carries more, and more precise, information, and can be regarded as an unbounded external memory. "Unbounded" means the number of vectors is not fixed but varies with the number of source tokens; it is not limited. When encoding of the source finishes, this external memory is initialized to the tokens' state vectors, and during the entire subsequent decoding it is only read, never written (one of the ways this mechanism differs from the Neural Turing Machine). Moreover, reading uses only content-based addressing, not location-based addressing; the two addressing schemes are not elaborated here, see \[[1](#参考文献)\]. Of course, neither restriction is necessary; it is simply how conventional attention works, and both await further exploration.
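The "attention as an unbounded, read-only memory" view can be sketched in a few lines of numpy. This is an illustrative sketch; `attention_read` and the bilinear scoring matrix `W` are assumptions for the example, not the repository's code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_read(encoder_states, decoder_state, W):
    """One read from the unbounded memory: one slot per source token,
    content-based addressing only, and no writes."""
    scores = encoder_states @ (W @ decoder_state)   # one score per token
    weights = softmax(scores)                       # addressing weights
    return weights, weights @ encoder_states        # weighted read (context)

rng = np.random.default_rng(1)
src_len, dim = 6, 5          # the slot count tracks the source length: unbounded
enc = rng.normal(size=(src_len, dim))
dec = rng.normal(size=dim)
w, ctx = attention_read(enc, dec, rng.normal(size=(dim, dim)))
```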
Collaborator:

  • For the last sentence: I suggest describing the core idea of each of the three addressing schemes in one sentence each. There is no need to repeat the formulas, but the parts that fit intuition well should be explained; the ideas behind the three addressing schemes should not be skipped.
  • Reason: these addressing schemes appear again below, so readers should roughly understand their ideas and what they do; skipping too much also hinders understanding.

Numbering the items would make the logic easier for readers to follow:
1. content-based addressing
2. location-based addressing
3. hybrid addressing

Contributor Author:

Added in the next chapter.


Compared with the attention mechanism above, the Neural Turing Machine has many similarities and many differences. Similarities: both use storage in matrix (or vector-group) form, with differentiable addressing. Differences: the Neural Turing Machine both reads and writes (it is a memory in the true sense), its addressing is not limited to content-based addressing but also combines location-based addressing (making tasks that require "continuous addressing", such as copying long sequences, easier), and it is bounded; the attention mechanism only reads and never writes, addresses only by content, and is unbounded.

#### Combining the three memories to strengthen neural machine translation
@lcy-seso (Collaborator) commented May 22, 2017:

A note on my reading experience of this section: the logic is not clear enough, and I had to work to reconstruct the section's line of reasoning.

  1. Dynamic memories 1–3 form a progression. Is this section a summary of "Dynamic memory 3 --- the Neural Turing Machine" that introduces the final solution?

    • If so, make it a third-level heading, so the logic is plainly clear.
    • Going from the three dynamic memories to this section, the logic does not come together; I kept trying to construct the thread of the writing myself. A logical transition is still needed here to guide the reader's thinking.
  2. My habit while reading is to reorganize long passages into question-and-answer pairs, where the pairs build on each other logically:

    • For the three paragraphs of this section I could not form simple, clear Q&A pairs with a logical progression; I kept asking myself what the author is trying to explain to me.
    • My impression is that the author is discussing external memory with me, but how many questions are being discussed? What is the answer to each? Are they scattered points, or do they build a fairly systematic picture? I have to reorganize it all myself; the logic does not come across simply and directly.

Contributor Author:

This section is not a summary of "Dynamic memory 3 --- the Neural Turing Machine"; rather, it states that this model mixes the three kinds of memory above and explains why. The revised version raises the question explicitly and then answers it point by point, which should be a little clearer.


Although the attention mechanism has become standard in ordinary sequence-to-sequence models, its external memory stores only token-level information about the source language. Inside the decoder, the information pathway still depends on the RNN's single state vector $h$ or $c$. It is therefore a natural idea to use the Neural Turing Machine's external memory mechanism to supplement the decoder's internal single-vector information pathway.

Of course, we could also widen the information bandwidth simply by enlarging the dimension of $h$ or $c$, but such an extension comes at $O(n^2)$ storage and compute cost (the state-to-state transition matrix), whereas NTM-based memory extension costs $O(n)$, because addressing operates per memory slot (Memory Slot) and the controller's parameter structure depends only on $m$ (the slot size). It is also worth noting that although a flattened matrix is still a vector, reading and writing a single state-vector memory is in essence **global**, while the NTM's mechanism is essentially **local**: reads and writes effectively touch only some of the memory slots (writes are technically global, but the addressing distribution is sharp, so the truly large weights fall on only a few slots). This locality makes memory access cleaner, with less interference.
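The cost argument in the paragraph above can be checked with back-of-the-envelope arithmetic. This is illustrative only; the function names are made up and the counts are rough.

```python
def state_widening_cost(n):
    # enlarging the RNN state to width n costs a dense n x n transition matrix
    return n * n

def memory_widening_cost(num_slots, m):
    # adding memory slots of size m costs storage linear in the slot count;
    # the controller's parameters depend on m, not on num_slots
    return num_slots * m

# doubling the state quadruples the transition cost ...
print(state_widening_cost(512), state_widening_cost(1024))
# ... while doubling the slot count only doubles the memory cost
print(memory_widening_cost(8, 512), memory_widening_cost(16, 512))
```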
Collaborator:

If this paragraph is making two points, use numbers to signal that to the reader.

Contributor Author:

Done.



So, in this example's implementation, the RNN's original state vector and the attention mechanism are both kept; at the same time, a bounded external memory based on a simplified Neural Turing Machine is introduced to supplement the decoder's single-state-vector memory. The overall model implementation follows paper \[[2](#参考文献)\], with minor differences; see the [Other Discussions](#其他讨论) chapter for details.
Collaborator:

"So" expresses a causal link, but from the previous two paragraphs I don't see how this "so" follows.

Contributor Author:

Removed.

@luotao1 (Contributor) left a comment:

  1. About the figures: redrawing is not settled yet, but even if they are redrawn, the English words in them must first be replaced with Chinese; the colleague doing the redrawing cannot handle that.
  2. Parenthesis usage can still be reduced a bit.

Note that the introduction of the cell state vector $c$ in LSTM, or of the gate-controlled linear cross-layer structure (leaky unit) for the state vector $h$ in GRU, has a different interpretation from the optimization perspective: it makes the spectra of the per-time-step first-order Jacobians in the gradient computation closer to the identity matrix, which alleviates long-range gradient decay and lowers the difficulty of optimization. But that does not stop us from understanding it intuitively as adding a "linear pathway" that makes the "memory channel" flow more smoothly; as shown in Figure 1 (taken from [this post](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)), the LSTM's cell state vector $c$ can be seen as such a "linear memory channel" for persisting information.

<div align="center">
<img src="image/lstm_c_state.png" width=700><br/>
Contributor:

Figure 1 hasn't been uploaded, has it?

Contributor Author:

Done.


#### Dynamic memory 3 --- the Neural Turing Machine

The Turing machine (Turing Machines) and the von Neumann architecture (Von Neumann Architecture) are the prototypes of computer architecture, in which the arithmetic unit (e.g. algebraic computation), the control unit (e.g. logical branching), and the memory together form the core operating mechanism of modern computers. The Neural Turing Machine (Neural Turing Machines) \[[1](#参考文献)\] attempts to use a neural network to simulate a differentiable (and hence learnable by gradient descent) Turing machine, aiming at more complex intelligence. Ordinary machine learning models, by and large, ignore explicit storage; the Neural Turing Machine is meant to remedy exactly this potential shortcoming.
Contributor:

"and hence learnable by gradient descent": why add "hence" (于是)?

Contributor Author:

Removed.

@lcy-seso (Collaborator) left a comment:

Almost LGTM.

optimizer = paddle.optimizer.Adam(
learning_rate=5e-5,
gradient_clipping_threshold=5,
regularization=paddle.optimizer.L2Regularization(rate=8e-4))
Collaborator:

Move lines 90–93 up to line 67, before the network definition. The bug PaddlePaddle/Paddle#2621 on the develop branch has not been fixed yet; if training with newer code, regularization and gradient clipping may both have no effect.

Contributor Author:

Done.

input=decoder_result, label=target)
return cost
else:
target_embeddings = paddle.layer.GeneratedInputV2(
Collaborator:

Change GeneratedInputV2 here to GeneratedInput. Since PR PaddlePaddle/Paddle#2288 on the develop branch, GeneratedInput and StaticInput no longer have the V2 suffix.

Contributor Author:

Done.


#### Dynamic Memory 2 --- The Attention Mechanism in Seq2Seq

However, the information bandwidth of the single vector $h$ or $c$ described in the previous section is limited. In sequence-to-sequence generation models, this bandwidth bottleneck is most evident when information passes from the encoder to the decoder: relying on one fixed-length state vector to encode an entire variable-length source sentence carries a large risk of information loss.
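For readers of this excerpt, the attention remedy for this bottleneck can be sketched in a few lines of numpy: instead of compressing the source into one vector, the decoder re-reads all encoder states at every step. This sketch uses simple dot-product scoring rather than the additive scoring of the original attention paper, and all names are invented for the illustration.

```python
import numpy as np

def attention_context(encoder_states, decoder_state):
    # Score every source position against the current decoder state,
    # normalize the scores, and take the weighted sum as the context.
    scores = encoder_states @ decoder_state          # (src_len,)
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                              # attention weights
    context = alpha @ encoder_states                 # (hidden,)
    return context, alpha

rng = np.random.default_rng(0)
H = rng.random((4, 8))      # 4 source positions, hidden size 8
s = rng.random(8)           # current decoder state
c, alpha = attention_context(H, s)
```

Viewed this way, attention is a read-only external memory whose slots are the encoder states, which is exactly why the `ExternalMemory` class discussed in this PR offers a read-only mode.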
Collaborator:

  1. `然而上节所属` --> `然而上节所述` (typo: "所属" should be "所述", i.e. "described in the previous section").

Contributor Author:

Done.


The structure of this class is as follows:

```
Collaborator:

Use ```python here; it enables syntax highlighting.

- Input argument `name`: name of the external memory unit; instances that share a name share the same external memory.
- Input argument `mem_slot_size`: dimension of a single memory slot (vector).
- Input argument `boot_layer`: layer used to initialize the memory slots. It must be of sequence type; its sequence length determines the number of memory slots.
- Input argument `readonly`: whether to enable read-only mode (with read-only mode on, the instance can be used for an attention mechanism, for example). When enabled, the `write` method must not be called.
Collaborator:

`打开是` --> `打开时` (typo: should read "when enabled").

Contributor Author:

Done.

import random
import paddle.v2 as paddle
from external_memory import ExternalMemory
from model import *
Collaborator:

from model import memory_enhanced_seq2seq

Contributor Author:

Done.

import paddle.v2 as paddle
from external_memory import ExternalMemory
from model import *
from data_utils import *
Collaborator:

from data_utils import reader_append_wrapper

Contributor Author:

Done.

import gzip
import paddle.v2 as paddle
from external_memory import ExternalMemory
from model import *
Collaborator:

from model import memory_enhanced_seq2seq

Contributor Author:

Done.

import paddle.v2 as paddle
from external_memory import ExternalMemory
from model import *
from data_utils import *
Collaborator:

from data_utils import reader_append_wrapper

Contributor Author:

Done.

"""
import distutils.util
import argparse
import gzip
Collaborator:

Add a blank line after line 6.

Contributor Author:

Done.

Contributor Author @xinghai-sun left a comment:

Done. Thanks for the reviews!


#### Dynamic Memory 2 --- The Attention Mechanism in Seq2Seq

However, the information bandwidth of the single vector $h$ or $c$ described in the previous section is limited. In sequence-to-sequence generation models, this bandwidth bottleneck is most evident when information passes from the encoder to the decoder: relying on one fixed-length state vector to encode an entire variable-length source sentence carries a large risk of information loss.
Contributor Author:

Done.


#### Dynamic Memory 3 --- Neural Turing Machines

The Turing machine and the von Neumann architecture are the prototypes of modern computer architecture. The arithmetic unit (e.g. algebraic computation), the control unit (e.g. branching logic), and the memory unit together form the core operating mechanism of today's computers. Neural Turing Machines \[[1](#参考文献)\] attempt to use neural networks to simulate a differentiable Turing machine (i.e. one learnable by gradient descent), in pursuit of more complex intelligence. Ordinary machine learning models, by contrast, mostly ignore explicit dynamic storage; Neural Turing Machines are meant to remedy exactly this potential shortcoming.
Contributor Author:

Done.

The class structure is as follows:

```
class ExternalMemory(object):
Contributor Author:

Done.

- Input argument `name`: name of the external memory unit; instances that share a name share the same external memory.
- Input argument `mem_slot_size`: dimension of a single memory slot (vector).
- Input argument `boot_layer`: layer used to initialize the memory slots. It must be of sequence type; its sequence length determines the number of memory slots.
- Input argument `readonly`: whether to enable read-only mode (with read-only mode on, the instance can be used for an attention mechanism, for example). When enabled, the `write` method must not be called.
Contributor Author:

Done.


Some key implementation details:

- The Neural Turing Machine's "external memory matrix" is implemented with `paddle.layer.memory` in sequence form (`is_seq=True`): the length of the sequence gives the number of memory slots, and the sequence's `size` gives the dimension of each memory slot (vector). The sequence is initialized from an external layer, so the number of memory slots is determined by that layer's output sequence length. The class can therefore implement not only bounded memory but also unbounded memory (i.e. a variable number of memory slots).
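To complement the read path shown earlier in the thread, the write half of such an external memory can be sketched in numpy using the NTM-style erase/add scheme. This is an illustration under assumed names, not the `ExternalMemory` class in this PR.

```python
import numpy as np

def write(memory, weights, erase, add):
    # Each slot is first partially erased, then added to, in
    # proportion to its addressing weight (NTM-style write).
    erased = memory * (1.0 - np.outer(weights, erase))
    return erased + np.outer(weights, add)

M = np.ones((3, 2))                    # 3 memory slots of width 2
w = np.array([1.0, 0.0, 0.0])          # address slot 0 only
M2 = write(M, w, erase=np.ones(2), add=np.array([0.5, 0.5]))
# slot 0 is fully erased and rewritten; the other slots are untouched
```

Because both erase and add are weighted by the same soft addressing vector, the write stays differentiable, and slots with near-zero weight are effectively left unchanged.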
Contributor Author:

Done.


Enter at the command line:

```
Contributor Author:

Done.

```
Or customize some of the parameters, for example:

```
Contributor Author:

Done.

--word_vec_dim 512 \
--hidden_size 1024 \
--memory_slot_num 8 \
--use_gpu True \
Contributor Author:

Done.

--hidden_size 1024 \
--memory_slot_num 8 \
--use_gpu True \
--trainer_count 4 \
Contributor Author:

Done.

--hidden_size 1024 \
--memory_slot_num 8 \
--use_gpu True \
--trainer_count 4 \
Contributor Author:

Done.

Collaborator @lcy-seso left a comment:

LGTM

@lcy-seso lcy-seso merged commit 717ccf5 into PaddlePaddle:develop Sep 13, 2017
wojtuss pushed a commit to wojtuss/models that referenced this pull request Mar 4, 2019
Lexical Analysis for Chinese (LAC) model

Successfully merging this pull request may close these issues.

example configuration for neural machine translation with external memory.
3 participants