
Update 1.2 with PR 548 bug fix #563


Merged: 1 commit, Jan 22, 2019
9 changes: 3 additions & 6 deletions doc/fluid/api_cn/layers_cn.rst
@@ -9638,8 +9638,7 @@ has_inf
Parameters:
    - **x(variable)** – the Tensor/LoDTensor to be checked

- Returns:
- A tensor Variable storing the output value, containing a bool value
+ Returns: A tensor Variable storing the output value, containing a bool value

Return type: Variable

@@ -9667,8 +9666,7 @@ has_nan
Parameters:
    - **x(variable)** – the Tensor/LoDTensor to be checked

- Returns:
- A tensor Variable storing the output value, containing a bool value
+ Returns: A tensor Variable storing the output value, containing a bool value

Return type: Variable

@@ -9690,8 +9688,7 @@ isfinite
Parameters:
    - **x(variable)** – the Tensor/LoDTensor to be checked

- Returns:
- Variable: a tensor Variable storing the output value, containing a bool value
+ Returns: Variable: a tensor Variable storing the output value, containing a bool value

Return type: Variable

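The three checks documented above share a common meaning: `has_inf` and `has_nan` flag the presence of any infinite or NaN element, while `isfinite` is true only when every element is finite. A minimal sketch of these semantics using NumPy (an illustration of the behavior, not the Fluid API itself):

```python
import numpy as np

x = np.array([1.0, float("inf"), 3.0])

# has_inf semantics: true if any element is +inf or -inf
has_inf = bool(np.isinf(x).any())

# has_nan semantics: true if any element is NaN
has_nan = bool(np.isnan(x).any())

# isfinite semantics: true only if every element is finite (no inf, no NaN)
is_finite = bool(np.isfinite(x).all())

print(has_inf, has_nan, is_finite)  # True False False
```

Note that `isfinite` is not simply the negation of `has_inf`: a tensor containing a NaN but no infinity fails `isfinite` while passing `has_inf`.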
2 changes: 1 addition & 1 deletion doc/fluid/user_guides/howto/training/cluster_howto.rst
@@ -96,7 +96,7 @@

Fluid distributed jobs support either synchronous or asynchronous training. In synchronous training, all trainer nodes merge the gradient data of every node at each mini-batch in lockstep and send it to the parameter server to complete the update. In asynchronous training, trainers do not wait
- for one another and can independently the parameter server's parameters. Usually, asynchronous training
+ for one another and can independently update the parameter server's parameters. Usually, asynchronous training
achieves higher overall throughput than synchronous training when there are more trainer nodes.

When the :code:`transpile` function is called, a distributed program for synchronous training is generated by default; by specifying :code:`sync_mode=False`
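The async setup described above can be sketched as follows, assuming the Fluid 1.x :code:`DistributeTranspiler` API; the endpoints, trainer ID, and trainer count below are placeholder values for illustration:

```python
import paddle.fluid as fluid

# Placeholder cluster configuration (assumed values, not from the source)
pserver_endpoints = "192.168.0.1:6174,192.168.0.2:6174"
trainer_id = 0
trainers = 4

t = fluid.DistributeTranspiler()
# sync_mode=False requests the asynchronous training program;
# omitting it (or passing True) yields the default synchronous program
t.transpile(
    trainer_id=trainer_id,
    pservers=pserver_endpoints,
    trainers=trainers,
    sync_mode=False,
)
trainer_prog = t.get_trainer_program()
```

With `sync_mode=False`, each trainer pushes gradients and pulls parameters independently, which trades per-step consistency for throughput as the trainer count grows.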