
raft: clean unstable log early #122438

Open
pav-kv opened this issue Apr 16, 2024 · 2 comments
Labels
A-kv-replication Relating to Raft, consensus, and coordination. C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) T-kv-replication KV Replication Team

Comments

@pav-kv
Collaborator

pav-kv commented Apr 16, 2024

Background

The unstable log structure in pkg/raft holds log entries until they have been written to storage and fsync-ed.

After the introduction of async log writes, the flow of entries from memory to Storage is:

  1. Entries are appended to unstable.
  2. On handleRaftReady, the unstable entries are extracted and paired with a MsgStorageAppend message.
  3. The batch of entries is written to Pebble.
    • If async log writes are enabled, and the batch qualifies for an async write, the batch is written to Pebble, but not synced.
    • If the write doesn't qualify for async write, the entries are written and synced.
  4. When the entries have been synced, we/Pebble invoke a callback that sends a MsgStorageAppendResp response back to the raft instance.
  5. When handling the append response, raft removes the entries from unstable.¹
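The unstable bookkeeping in steps 1 and 5 can be sketched roughly as follows. This is a simplified model for illustration, not the actual pkg/raft code: Entry, unstable, truncateAndAppend, and stableTo are pared-down stand-ins for the real types and methods.

```go
package main

import "fmt"

// Entry is a simplified raft log entry (index and term only).
type Entry struct {
	Index, Term uint64
}

// unstable is a minimal sketch of the in-memory log tail. Entries stay
// here until the append response confirms they are synced to storage.
type unstable struct {
	offset  uint64 // log index of entries[0]
	entries []Entry
}

// truncateAndAppend adds newly proposed entries (step 1), truncating any
// conflicting suffix left by a superseded leader.
func (u *unstable) truncateAndAppend(ents []Entry) {
	after := ents[0].Index
	switch {
	case after == u.offset+uint64(len(u.entries)):
		u.entries = append(u.entries, ents...) // contiguous append
	case after <= u.offset:
		u.offset = after
		u.entries = ents // replaces the entire unstable tail
	default:
		u.entries = append(u.entries[:after-u.offset], ents...) // truncate conflict
	}
}

// stableTo removes entries up to (index, term) when the append response
// arrives (step 5), but only if the entry at index still carries the same
// term — otherwise it was overwritten and must be kept.
func (u *unstable) stableTo(index, term uint64) {
	if index < u.offset || index >= u.offset+uint64(len(u.entries)) {
		return
	}
	if u.entries[index-u.offset].Term != term {
		return // overwritten by a newer leader; keep entries
	}
	u.entries = u.entries[index-u.offset+1:]
	u.offset = index + 1
}

func main() {
	u := unstable{offset: 5}
	u.truncateAndAppend([]Entry{{5, 1}, {6, 1}, {7, 1}})
	u.stableTo(6, 1) // append response acknowledged entries up to index 6
	fmt.Println(u.offset, len(u.entries)) // entry 7 remains unstable
}
```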

Improvement

In this flow, there is a window (between steps 3 and 5) when an entry has already been written to Pebble and sits in its memtables, yet still resides in the unstable struct. With async writes enabled, this window can span multiple Ready iterations. Holding these entries in unstable is not strictly necessary: they are already readable from the log Storage. We should clear them in step (3), which effectively turns the step into a "transfer" of entries from unstable to Storage.

In Replication AC, entry tokens are admitted and returned to the leader in step (3) too. Clearing the unstable entries at this point effectively includes them in the replication token "lifetime", and protects the node from OOMs caused by unstable build-up.

The modification will be along the lines of having a new method/message to raft saying that some/all entries in unstable have been (non-durably) written, so raft can clear them. There can be some complications in the interaction with the async writes protocol.
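One possible shape for such a method, as a sketch: writtenTo below is a hypothetical name, not an existing raft API, and the term check is one way to handle the async-writes complication of in-flight appends overwriting the tail.

```go
package main

import "fmt"

type Entry struct{ Index, Term uint64 }

// unstableLog is a simplified model of the unstable structure.
type unstableLog struct {
	offset  uint64 // log index of entries[0]
	entries []Entry
}

// writtenTo is a hypothetical method telling raft that entries up to
// (index, term) have been written to storage (not necessarily synced),
// so they are readable from Storage and can be dropped from memory early.
// The term check ensures we never drop an entry that a newer leader has
// since overwritten while that overwrite is still in flight.
func (u *unstableLog) writtenTo(index, term uint64) {
	if index < u.offset || index >= u.offset+uint64(len(u.entries)) {
		return
	}
	i := index - u.offset
	if u.entries[i].Term != term {
		return // conflicting in-flight append; keep entries
	}
	u.entries = u.entries[i+1:]
	u.offset = index + 1
}

func main() {
	u := unstableLog{offset: 10, entries: []Entry{{10, 2}, {11, 2}, {12, 3}}}
	u.writtenTo(11, 2)                    // written (unsynced) up to index 11
	fmt.Println(u.offset, len(u.entries)) // entries through 11 released early
}
```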

Alternatively, we can fully adopt the "transfer" semantics, and remove entries from unstable as soon as Ready returns them. We would still need to deliver "acks" to raft when entries are synced.
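The "transfer at Ready" alternative could look like the following sketch (readyTransfer and its fields are invented for illustration): entries leave unstable the moment Ready hands them off, while a separate in-flight record lets the eventual sync acks still advance durability.

```go
package main

import "fmt"

type Entry struct{ Index, Term uint64 }

// readyTransfer sketches the alternative: clear unstable when Ready
// extracts the entries, but keep enough bookkeeping to match sync acks.
type readyTransfer struct {
	unstable []Entry
	inFlight []Entry // written to storage, awaiting fsync ack
	synced   uint64  // highest index known durable
}

// nextReady hands entries to the storage write and clears them from
// unstable immediately (the "transfer" semantics).
func (r *readyTransfer) nextReady() []Entry {
	ents := r.unstable
	r.unstable = nil
	r.inFlight = append(r.inFlight, ents...)
	return ents
}

// onSynced is the ack raft still needs once the write is fsync-ed; it
// retires in-flight entries and advances the durable index.
func (r *readyTransfer) onSynced(index uint64) {
	for len(r.inFlight) > 0 && r.inFlight[0].Index <= index {
		r.inFlight = r.inFlight[1:]
	}
	if index > r.synced {
		r.synced = index
	}
}

func main() {
	r := readyTransfer{unstable: []Entry{{1, 1}, {2, 1}}}
	ents := r.nextReady()
	fmt.Println(len(ents), len(r.unstable), len(r.inFlight)) // transferred
	r.onSynced(2)
	fmt.Println(r.synced, len(r.inFlight)) // durable, nothing in flight
}
```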

Jira issue: CRDB-37890

Footnotes

  1. Some entries may have already been cleared from unstable by this time, e.g. if the leadership changed and the new leader has overwritten some entries. We only remove entries that are guaranteed to be matched by storage and have no in-flight appends overwriting them. See this comment for some details.

@pav-kv pav-kv added C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) A-kv-replication Relating to Raft, consensus, and coordination. labels Apr 16, 2024
@blathers-crl blathers-crl bot added the T-kv-replication KV Replication Team label Apr 16, 2024

blathers-crl bot commented Apr 16, 2024

cc @cockroachdb/replication

@exalate-issue-sync exalate-issue-sync bot added T-kv-replication KV Replication Team and removed T-kv-replication KV Replication Team labels Apr 16, 2024
