Prepare an optimization block and a PrepareContext for each parameter.
Add a BlockQueue for each parameter block. The queue stores the gradient VariableMessages that trainers send for that parameter.
Add a thread for each parameter to run its optimization block.
The thread reads a gradient from its BlockQueue, creates a sub-scope to deserialize the gradient into, and then runs the optimization block in that sub-scope.
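A minimal, self-contained sketch of this per-parameter pipeline (one blocking queue plus one worker thread) follows. `BlockQueue`, `GradientMsg`, and the plain SGD step standing in for the prepared optimization block are illustrative assumptions, not Paddle's actual classes.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Illustrative stand-in for the proposed BlockQueue: a simple unbounded
// blocking queue, one instance per parameter block.
template <typename T>
class BlockQueue {
 public:
  void Push(T v) {
    {
      std::lock_guard<std::mutex> lock(mu_);
      queue_.push_back(std::move(v));
    }
    cv_.notify_one();
  }
  T Pop() {  // Blocks until a gradient message is available.
    std::unique_lock<std::mutex> lock(mu_);
    cv_.wait(lock, [this] { return !queue_.empty(); });
    T v = std::move(queue_.front());
    queue_.pop_front();
    return v;
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::deque<T> queue_;
};

// Stand-in for one serialized gradient VariableMessage from a trainer.
struct GradientMsg {
  std::vector<float> data;
};

// Per-parameter optimization worker: pop a gradient, "deserialize" it into
// thread-private storage (playing the role of the sub-scope), then run the
// optimization block (modeled here as one plain SGD step) on the parameter.
void OptimizeLoop(BlockQueue<GradientMsg>* queue, std::vector<float>* param,
                  std::mutex* param_mu, float lr) {
  for (;;) {
    GradientMsg msg = queue->Pop();
    if (msg.data.empty()) return;  // Sentinel so the sketch terminates.
    std::vector<float> grad = std::move(msg.data);  // "sub-scope" copy
    std::lock_guard<std::mutex> lock(*param_mu);
    for (size_t i = 0; i < param->size(); ++i) {
      (*param)[i] -= lr * grad[i];
    }
  }
}

int main() {
  std::vector<float> param(4, 1.0f);  // One parameter block.
  std::mutex param_mu;
  BlockQueue<GradientMsg> queue;
  std::thread worker(OptimizeLoop, &queue, &param, &param_mu, 0.1f);

  // A trainer pushes a gradient; the worker applies it asynchronously.
  queue.Push(GradientMsg{{1.f, 2.f, 3.f, 4.f}});
  queue.Push(GradientMsg{});  // Shut the worker down cleanly.
  worker.join();
}
```

Because each parameter owns its queue and worker, updates to different parameters proceed fully in parallel, while updates to the same parameter are serialized by its queue.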
Add one thread that serves parameter Get requests from the global scope to trainers. (We may need a thread pool to speed up the Get path, but the gRPC interface seems to work from only one thread; this needs a test.)
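As a rough model of that serving thread (the request type and names below are invented for illustration; a real pserver would answer these through its gRPC service), a single thread can drain pending Get requests and reply with copies read from the global scope:

```cpp
#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// The "global scope": parameter name -> current value.
std::unordered_map<std::string, std::vector<float>> g_scope{{"w0", {1, 2, 3}}};
std::mutex g_scope_mu;

// One Get request: a parameter name plus a promise the serving thread
// fulfills with a copy of the current value.
struct GetRequest {
  std::string name;
  std::promise<std::vector<float>> reply;
};

// Single serving thread. This one-shot demo drains whatever is queued;
// a real server loop would block on new requests. If gRPC turns out to
// allow it, the same loop could be replicated across a thread pool.
void ServeGets(std::queue<GetRequest>* pending) {
  while (!pending->empty()) {
    GetRequest req = std::move(pending->front());
    pending->pop();
    std::lock_guard<std::mutex> lock(g_scope_mu);
    req.reply.set_value(g_scope.at(req.name));  // Reply with a copy.
  }
}

int main() {
  std::queue<GetRequest> pending;
  GetRequest req{"w0", {}};
  std::future<std::vector<float>> fut = req.reply.get_future();
  pending.push(std::move(req));

  std::thread server(ServeGets, &pending);
  std::vector<float> w0 = fut.get();  // The trainer receives the copy.
  server.join();
}
```

Copying the value out under the lock keeps the Get path decoupled from the optimization workers, which may be writing the same parameter concurrently.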
Trainers send_vars to and read_vars from the pserver without send_barrier and get_barrier.
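For contrast with synchronous mode, here is a schematic trainer loop under this barrier-free protocol. `SendGrad` and `GetParam` are hypothetical client stubs standing in for the real send_vars/read_vars RPCs.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical client stubs; a real trainer would issue gRPC calls here.
void SendGrad(const std::string& name, const std::vector<float>& grad) {
  std::cout << "send_vars: " << name << " [" << grad.size()
            << " floats], no send_barrier afterwards\n";
}
std::vector<float> GetParam(const std::string& name) {
  std::cout << "read_vars: " << name << ", no get_barrier beforehand\n";
  return std::vector<float>(4, 0.0f);
}

int main() {
  std::vector<float> w = GetParam("w0");
  for (int step = 0; step < 2; ++step) {
    std::vector<float> grad(w.size(), 0.1f);  // Stand-in for backprop.
    // Async mode: push the gradient and immediately fetch the freshest
    // parameters. In sync mode the trainer would instead wait on
    // send_barrier for all trainers' gradients, then on get_barrier
    // for the aggregated update.
    SendGrad("w0", grad);
    w = GetParam("w0");
  }
}
```

The trade-off is the usual one for async SGD: trainers never block on each other, at the cost of applying gradients computed against slightly stale parameters.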
Hello, this issue has not been updated in the past month, so we will close it today for the sake of other users' experience. If you still need to follow up after it is closed, please feel free to reopen it and we will reply within 24 hours. We apologize for the inconvenience caused by the closure and thank you for your support of PaddlePaddle!
Project: https://github.com/PaddlePaddle/Paddle/projects/61
Design
Operators
Transpiler #9997
Consider adding a `.trainer_n` suffix to the gradient block in async mode.
Benchmark