modify translation #1313

Open
wants to merge 2 commits into base: master
4 changes: 2 additions & 2 deletions chapter_computational-performance/multiple-gpus.md
@@ -25,7 +25,7 @@
However, the dense synchronization required between the GPU interfaces can be difficult to handle, especially when the computational workloads of the layers are not properly matched,
and when the interfaces between layers need to transfer large amounts of data (for example, activations and gradients, whose volume may exceed the bandwidth of the GPU bus).
Moreover, the ordering of compute-intensive operations is also crucial for this kind of partitioning. The best study of this aspect is :cite:`Mirhoseini.Pham.Le.ea.2017`; at its core it remains a hard problem, and it is unclear whether good linear scaling can be achieved on a given problem.
- In summary, unless a framework or the operating system itself supports chaining multiple GPUs together, this approach is not recommended
+ In summary, unless an excellent framework or the operating system itself supports chaining multiple GPUs together, using this approach is not recommended
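To make the cross-GPU layer split concrete, here is a minimal PyTorch sketch (an illustration for this review, not part of the diff). It assumes two visible CUDA devices and hypothetical layer sizes; the explicit `.to(...)` calls mark exactly the interface transfers that the paragraph warns about.

```python
import torch
from torch import nn

class TwoGPUNet(nn.Module):
    """Hypothetical model parallelism: first half on cuda:0, second on cuda:1."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to('cuda:0')
        self.part2 = nn.Linear(512, 10).to('cuda:1')

    def forward(self, x):
        h = self.part1(x.to('cuda:0'))
        # Activations cross the interconnect here (and gradients cross back
        # during the backward pass) -- the synchronization point described above.
        return self.part2(h.to('cuda:1'))
```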

The second approach is to split the work within a layer.
For example, rather than computing $64$ channels on a single GPU, we could split the problem across $4$ GPUs, each of which generates the data for $16$ channels.
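As a sketch of this within-layer split (again illustrative, not part of the diff): four devices each compute a $16$-channel slice of what would otherwise be a single $64$-channel convolution, and the slices are concatenated afterwards. The device list and layer shapes are assumptions for the example.

```python
import torch
from torch import nn

devices = [f'cuda:{i}' for i in range(4)]  # assumes 4 visible GPUs
# Four convolutions, each producing a 16-channel slice of what would
# otherwise be one 64-channel convolution on a single GPU.
convs = [nn.Conv2d(3, 16, kernel_size=3, padding=1).to(d) for d in devices]

def channel_parallel_forward(x):
    # Broadcast the input to every device, compute each slice, then
    # gather the results on one device and concatenate along channels.
    outs = [conv(x.to(d)) for conv, d in zip(convs, devices)]
    return torch.cat([o.to(devices[0]) for o in outs], dim=1)  # 64 channels
```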
@@ -573,4 +573,4 @@ train(num_gpus=2, batch_size=256, lr=0.2)

:begin_tab:`paddle`
[Discussions](https://discuss.d2l.ai/t/11860)
:end_tab: