Update Chapter 10 #86

Merged
YanjieGao merged 23 commits into main from xuehui on Apr 21, 2022
Conversation

xuehui1991 (Contributor)

No description provided.

Textbook/术语表.md (outdated, resolved)
- Actor (sampler): the policy (i.e., the Q-network) interacts with the environment, which again involves balancing exploration and exploitation. GORILA additionally defines a "Bundled Mode", in which the actor's policy is bundled with the Q-network that the learner is updating in real time.

- Learner: the learner sends the parameter gradients of the Q-network to the parameter server.

- Replay buffer: GORILA supports two forms. In local mode the buffer is stored on the same machine as the actor; in global mode all data are aggregated into a distributed database, which scales well at the cost of extra communication overhead.

- Parameter server: it stores the sequence of gradient updates to the Q-network parameters, which makes it possible to roll the Q-network back, and allows multiple gradients to be combined to stabilize training. Stability problems are unavoidable in a distributed environment (e.g., nodes disappearing, networks or machines slowing down); GORILA copes with these using several strategies, such as dropping gradients that are too stale or whose loss deviates too far from the mean. A minimal sketch of these components follows this list.
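To make the data flow above concrete, here is a minimal single-process Python sketch of the four GORILA components. Everything here is illustrative: the class names (`ParameterServer`, `ReplayBuffer`), the linear toy policy, and the gradient/loss stand-ins are assumptions for exposition, not code from GORILA. Only the two filters in `apply_gradient()` (dropping stale gradients and dropping gradients with outlier losses) and the bundled actor-learner sync follow the description above.

```python
# Illustrative sketch only; names and the toy update rule are assumptions.
import random
import numpy as np

class ParameterServer:
    """Stores the Q-network parameters plus the stream of gradient updates.

    Keeping the update history is what enables rollback; the two checks in
    apply_gradient() mirror the stale-gradient and outlier-loss filters
    described above.
    """
    def __init__(self, dim, max_staleness=10, outlier_sigma=3.0, lr=0.01):
        self.params = np.zeros(dim)
        self.version = 0
        self.history = []          # (version, gradient) pairs, enabling rollback
        self.losses = []           # running record of reported losses
        self.max_staleness = max_staleness
        self.outlier_sigma = outlier_sigma
        self.lr = lr

    def apply_gradient(self, grad, loss, version):
        # Drop gradients computed from parameters that are too old.
        if self.version - version > self.max_staleness:
            return False
        # Drop gradients whose loss deviates too far from the running mean.
        if len(self.losses) >= 10:
            mean, std = np.mean(self.losses), np.std(self.losses) + 1e-8
            if abs(loss - mean) > self.outlier_sigma * std:
                return False
        self.losses.append(loss)
        self.history.append((self.version, grad))
        self.params = self.params - self.lr * grad
        self.version += 1
        return True

class ReplayBuffer:
    """Local-mode replay buffer, living on the same machine as the actor."""
    def __init__(self, capacity=10_000):
        self.data = []
        self.capacity = capacity

    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)          # evict the oldest transition
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))

# Bundled mode: each step, the actor refreshes its policy from the latest
# learner parameters before acting; learner gradients flow back to the server.
ps = ParameterServer(dim=4)
buffer = ReplayBuffer()
for step in range(200):
    params, version = ps.params.copy(), ps.version   # actor syncs its policy
    state = np.random.randn(4)
    action = int(params @ state > 0)                 # toy linear "Q-network"
    reward = float(action == (state.sum() > 0))      # toy reward signal
    buffer.add((state, action, reward))
    batch = buffer.sample(8)
    # Learner: toy gradient/loss stand-ins for the real DQN update.
    grad = -np.mean([r * s for s, a, r in batch], axis=0)
    loss = float(np.mean([(1.0 - r) ** 2 for s, a, r in batch]))
    ps.apply_gradient(grad, loss, version)
print("final parameter version:", ps.version)
```

In a real deployment each component would run in its own process or on its own machine, and global mode would replace `ReplayBuffer` with a distributed store; the sketch collapses everything into one loop only to show the interaction pattern.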
Contributor


remove

@YanjieGao
Contributor

In the figure, it might be better to put time on the horizontal axis and the algorithm names inside the circles.

@YanjieGao YanjieGao mentioned this pull request Apr 21, 2022
@YanjieGao YanjieGao merged commit 8752bbc into main Apr 21, 2022
@YanjieGao YanjieGao deleted the xuehui branch April 21, 2022 12:52