12274 - Reduce sync delay for small databases #2

merged 3 commits into from Jun 27, 2011



kocolosk commented Jun 14, 2011

It's a little hacky, but I think the end result is positive. The idea is to replicate at most 100 updates to a database shard at a time, then move on to the next shard in the queue. If the shard has pending changes waiting to be synced it gets re-inserted into the waiting queue. I also refactored the internal data structures in mem3_sync so that we could track the total number of pending changes (i.e., the backlog) as the server reaches a steady state.
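The scheduling idea described above — cap each shard at a fixed batch of updates, then rotate to the next shard, re-queueing any shard that still has pending changes — can be sketched as follows. This is a language-agnostic illustration in Python, not the actual mem3_sync Erlang code; the names `sync_round`, `pending_counts`, and the `BATCH_SIZE` constant are hypothetical, with only the 100-update cap taken from the description.

```python
from collections import deque

BATCH_SIZE = 100  # mirrors the "at most 100 updates per shard" cap

def sync_round(queue, pending_counts, batch_size=BATCH_SIZE):
    """Replicate at most `batch_size` updates for the shard at the head
    of the queue; re-queue the shard if changes are still pending."""
    shard = queue.popleft()
    replicated = min(pending_counts[shard], batch_size)
    pending_counts[shard] -= replicated
    if pending_counts[shard] > 0:
        queue.append(shard)  # still behind: back into the waiting queue
    return shard, replicated

# Usage: three shards, one with a large backlog. The heavily loaded shard
# is drained in several passes, so the small shards still make progress.
queue = deque(["shards/a", "shards/b", "shards/c"])
pending_counts = {"shards/a": 250, "shards/b": 40, "shards/c": 10}
while queue:
    sync_round(queue, pending_counts)

# Tracking the sum of pending counts gives the total backlog the
# refactored data structures are meant to expose.
backlog = sum(pending_counts.values())
```

Because the small batch keeps any one shard from monopolizing the replicator, every shard in the queue is visited within a bounded number of updates, which is what reduces sync delay for small databases.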

An alternative design would be to have mem3_rep return {ok, PendingChanges} every time, and make sure that PendingChanges always bubbles up to the sync server. The current approach is to exit(normal) when PendingChanges == 0. We could also make the batch size (100) and the number of batches (1) configurable, though I don't see an immediate need for it. The batch size is kept small at the moment so that the replicator makes progress even on heavily loaded shards.

kocolosk added some commits Jun 9, 2011

Exit after replicating one batch
If the replication completes in this batch, the process will exit
normally.  If not, the reason will be {pending_changes, N} and will
be logged at 'warn' level.  mem3_sync will then push the replication
back into the queue.

BugzID: 12274
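The signaling scheme in this commit — exit normally when caught up, otherwise exit with a {pending_changes, N} reason that the sync server logs at 'warn' level and turns into a re-queue — can be sketched like this. Again a hypothetical Python analogue, not the Erlang implementation; `PendingChanges`, `replicate_batch`, and `supervise` are invented names standing in for the worker's exit reason and mem3_sync's handling of it.

```python
import logging

class PendingChanges(Exception):
    """Stands in for the {pending_changes, N} exit reason."""
    def __init__(self, n):
        super().__init__(n)
        self.n = n

def replicate_batch(pending, batch_size=100):
    """Replicate one batch; return normally only when fully caught up."""
    remaining = max(pending - batch_size, 0)
    if remaining:
        raise PendingChanges(remaining)  # abnormal exit with the backlog size
    return None  # normal exit: replication completed in this batch

def supervise(job_pending):
    """Sketch of the sync server's reaction to a worker exiting:
    log the pending count and push the job back into the queue."""
    requeue = []
    try:
        replicate_batch(job_pending)
    except PendingChanges as exc:
        logging.warning("pending_changes: %d", exc.n)
        requeue.append(exc.n)
    return requeue

# A 250-change backlog exits with 150 pending and is re-queued;
# an 80-change backlog completes in one batch and is not.
```

Using the exit reason as the channel keeps the common caught-up case on the ordinary normal-exit path, at the cost of overloading abnormal exits for flow control — which is the trade-off the {ok, PendingChanges} alternative above would avoid.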

kocolosk merged commit 7a2b66f into master Jun 27, 2011

smithsz pushed a commit that referenced this pull request Feb 11, 2015

Sam Smith
Feedback from PR #2
- Remove the ability to set batch_count 'all'
- Add an additional go/1 clause to handle batch_size = 0 case