After a snapshot is created, how is log compaction done? #6

Closed
pdu opened this issue Mar 12, 2018 · 4 comments

pdu commented Mar 12, 2018

In the counter example's implementation, after a snapshot is created, is it braft that automatically performs the log compaction?

If so, how is it guaranteed that the snapshot's timestamp stays consistent with the log? on_apply may involve CPU-heavy operations, so the applied state can lag several minutes of data behind the log.

Also, is it enough for only the leader to create snapshots?

chenzhangyi (Collaborator) commented

  1. Yes, it is done automatically.
  2. Snapshotting and apply are serialized; when a snapshot is taken, braft records how far the log has already been applied at that point (see the sketch after this list).
  3. If the log runs far ahead of the state machine, the service is severely congested; in that case you should consider limiting the service's concurrency.
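
The automatic behaviour is driven by `snapshot_interval_s` in `braft::NodeOptions`: braft periodically asks the state machine to save a snapshot and, once it is persisted, truncates the log entries the snapshot already covers. Below is a minimal sketch of the save side, assuming a counter-like state machine with an illustrative `_value` field (not the exact counter example code); because `on_snapshot_save` runs on the same serialized path as `on_apply`, braft knows exactly which applied index the snapshot corresponds to.

```cpp
#include <cerrno>                // EIO
#include <braft/raft.h>          // braft::StateMachine, braft::Iterator, braft::Closure
#include <braft/storage.h>       // braft::SnapshotWriter
#include <brpc/closure_guard.h>  // brpc::ClosureGuard
#include <butil/atomicops.h>     // butil::atomic

class Counter : public braft::StateMachine {
public:
    Counter() : _value(0) {}

    void on_apply(braft::Iterator& iter) override {
        for (; iter.valid(); iter.next()) {
            // ... decode iter.data() and update _value (omitted in this sketch) ...
        }
    }

    // Runs serialized with on_apply, so no entry can be applied while the
    // state is captured; braft records the last applied index for this
    // snapshot and discards the covered log entries after it is saved.
    void on_snapshot_save(braft::SnapshotWriter* writer,
                          braft::Closure* done) override {
        brpc::ClosureGuard done_guard(done);
        const int64_t saved = _value.load(butil::memory_order_relaxed);
        (void)saved;
        // Persist `saved` into a file under writer->get_path() (file I/O
        // omitted here), then register the file so braft can ship it to a
        // peer that later needs to install this snapshot.
        if (writer->add_file("counter_data") != 0) {
            done->status().set_error(EIO, "Fail to add counter_data to snapshot");
        }
    }

protected:
    butil::atomic<int64_t> _value;
};
```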


pdu commented Mar 13, 2018

@chenzhangyi Thanks for the answers!

One more question: in the counter example, is on_snapshot_load called only once, when the raft node first starts, and never again afterwards? And why does the example insist that the leader must not run this logic?

One scenario: migrating the service from one IDC to another by directly copying all the snapshots and logs to the new IDC and bringing up a raft group there; in that case the leader does need to load its data from the snapshot, right?

Is it only the leader that creates snapshots?

chenzhangyi (Collaborator) commented

  1. The first call happens at startup to load the data; it has nothing to do with whether the node is the leader (see the sketch after this list). After that, during normal operation a node only needs to download and load a snapshot when its data has fallen too far behind.
  2. Service migration should go through the change_peers flow; copying the log and snapshot is automatic, so there is no need to copy anything by hand.
  3. Taking a snapshot is a purely local, per-node operation; each node does its own.
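
To complement the save-side sketch after the first answer, here is the matching load side under the same assumptions (illustrative `Counter` class and `_value` field, plus the assumption that the class declares this override). braft invokes it once at startup when a local snapshot exists, regardless of role, and afterwards only on a node that has fallen far enough behind that it had to install a snapshot downloaded from the leader.

```cpp
// Assumes the Counter sketch above also declares:
//   int on_snapshot_load(braft::SnapshotReader* reader) override;
int Counter::on_snapshot_load(braft::SnapshotReader* reader) {
    int64_t value = 0;
    // Read the value back from the "counter_data" file under
    // reader->get_path() (file parsing omitted in this sketch).
    _value.store(value, butil::memory_order_relaxed);
    return 0;  // a non-zero return would put this node into an error state
}
```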


pdu commented Mar 13, 2018

@chenzhangyi Thanks!
