
Buffer applyCh to size MaxAppendEntries #439

Closed

Conversation

@alecjen (Contributor) commented Feb 1, 2021

This change implements the solution suggested in #124 to enable batch processing off of the raft applyCh.

For context, our application uses this library, and we are currently bottlenecked by StoreLogs throughput. We believe we can boost this throughput by increasing the batch size passed to StoreLogs, which fsyncs on each call.
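For illustration, a minimal sketch of why larger StoreLogs batches help (this is not the library's actual LogStore; the file layout and JSON encoding are hypothetical placeholders). The whole batch is persisted with a single fsync, so a batch of N entries pays for one flush instead of N:

package batchstore

import (
	"encoding/json"
	"os"

	"github.com/hashicorp/raft"
)

// fileLogStore is a toy file-backed store used only to illustrate batching.
type fileLogStore struct {
	f *os.File
}

// StoreLogs appends every log in the batch, then fsyncs once at the end.
func (s *fileLogStore) StoreLogs(logs []*raft.Log) error {
	enc := json.NewEncoder(s.f)
	for _, l := range logs {
		if err := enc.Encode(l); err != nil {
			return err
		}
	}
	return s.f.Sync() // the expensive part: one disk flush per batch
}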

Appreciate all the help team 💯

@hashicorp-cla commented Feb 1, 2021

CLA assistant check
All committers have signed the CLA.

@alecjen (Contributor, Author) commented Feb 1, 2021

Actually, I'll just try to fix the race condition here by using a context and done channel, as suggested.
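(For reference, a generic sketch of the context-plus-done-channel shutdown pattern mentioned here; the specific race in this branch isn't shown, and all names are placeholders.)

package worker

import "context"

// runDrainer drains work items until ctx is cancelled. The returned channel
// is closed only after the goroutine has fully exited, so callers can wait
// on it instead of racing with an in-flight item.
func runDrainer(ctx context.Context, work <-chan func()) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		for {
			select {
			case <-ctx.Done():
				return
			case fn := <-work:
				fn()
			}
		}
	}()
	return done
}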

api.go (Outdated)

@@ -498,7 +499,7 @@ func NewRaft(conf *Config, fsm FSM, logs LogStore, stable StableStore, snaps Sna
 	// Create Raft struct.
 	r := &Raft{
 		protocolVersion: protocolVersion,
-		applyCh:         make(chan *logFuture),
+		applyCh:         make(chan *logFuture, conf.ApplyChSize),

Should this now just be conf.MaxAppendEntries, given that it will at most pull that many off the channel?
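If that suggestion is taken (MaxAppendEntries is an existing Config field; this is just the reviewer's proposal restated, not the final diff), the line would read:

	applyCh: make(chan *logFuture, conf.ApplyChSize),

becoming:

	applyCh: make(chan *logFuture, conf.MaxAppendEntries),

which avoids introducing a new ApplyChSize knob, since, as the comment notes, the leader drains at most MaxAppendEntries entries off the channel per batch anyway.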

@alecjen alecjen changed the title Configurable applyCh size Buffer applyCh to size MaxAppendEntries Feb 1, 2021
@alecjen alecjen marked this pull request as draft February 2, 2021 05:36
@alecjen alecjen closed this Mar 1, 2021