Flush messages to disk in batches. #1388

Merged

gerhard merged 3 commits into stable from rabbitmq-server-batch-betas on Oct 23, 2017
Conversation

@hairyhum (Contributor) commented Oct 10, 2017

If messages are to be embedded into the queue index, there is no credit flow limit, so message batches can grow too big and block the queue process. Limiting the batch size allows consumers to make progress while publishers are blocked by the paging-out process.

[#151614048]
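The idea described above can be sketched as follows. This is a hypothetical illustration, not the actual patch: the module name, function names, and the index-write placeholder are all assumptions; the real change lives inside the queue process.

```erlang
%% Hypothetical sketch: flush accumulated messages to the queue index
%% in bounded batches, so the queue process can interleave consumer
%% deliveries between batches instead of blocking on one huge write.
-module(batch_flush).
-export([flush_in_batches/2]).

%% Split Msgs into chunks of at most BatchSize and flush each chunk
%% separately instead of writing a single unbounded batch.
flush_in_batches([], _BatchSize) ->
    ok;
flush_in_batches(Msgs, BatchSize) when BatchSize > 0 ->
    {Batch, Rest} = take(BatchSize, Msgs),
    flush_batch(Batch),                      %% write one bounded batch
    flush_in_batches(Rest, BatchSize).

%% Take up to N elements from the front of the list.
take(N, List) when length(List) =< N -> {List, []};
take(N, List) -> lists:split(N, List).

flush_batch(_Batch) ->
    %% placeholder for the actual queue index write
    ok.
```

Between each `flush_batch/1` call the queue process gets a chance to handle other work, which is what lets consumers make progress during paging-out.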

@lukebakken (Collaborator) left a comment

lukebakken added a commit to rabbitmq/rabbitmq-common that referenced this pull request Oct 12, 2017
lukebakken added a commit that referenced this pull request Oct 12, 2017
Alternative for #1388 that does not use process dictionary.

Requires rabbitmq/rabbitmq-common#228
```erlang
case get(waiting_bump) of
    true -> ok;
    _ -> self() ! bump_reduce_memory_use,
         put(waiting_bump, waiting)
```
@lukebakken (Collaborator) commented:
Instead of `waiting`, I think the value here should be `true`; otherwise the skip case won't be hit on future calls made while the message is still pending. Also see #1393
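A minimal sketch of the suggested change, assuming it is wrapped in a helper function (the module and function names here are hypothetical; only the `put(waiting_bump, true)` line reflects the actual suggestion):

```erlang
%% Store `true` in the process dictionary so that later calls made
%% while the bump message is still in flight hit the skip clause.
-module(bump_fix).
-export([maybe_bump_reduce_memory_use/0]).

maybe_bump_reduce_memory_use() ->
    case get(waiting_bump) of
        true ->
            ok;                               %% a bump is already pending: skip
        _ ->
            self() ! bump_reduce_memory_use,  %% schedule exactly one bump
            put(waiting_bump, true),          %% was `waiting`, which never matched `true`
            ok
    end.
```

With `waiting` stored instead of `true`, the first clause never matches, so every call sends another `bump_reduce_memory_use` message.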

@hairyhum (Contributor, Author) replied:
Right. Missed that.

@lukebakken (Collaborator) left a comment

I was able to see both consumers and producers blocked for quite some time prior to this patch; after applying it, I did not see consumers blocked at the same point during the perf test run.

Daniil Fedotov added 2 commits October 18, 2017 17:35
@michaelklishin (Member) commented Oct 20, 2017

Here are some (fairly basic) benchmark results in a constrained environment, both with and without consumer rate limiting.

When consumer throughput is really constrained, we see that publishers are throttled more aggressively but consumers are never completely blocked — stdev for consumer latency is significantly lower on this branch.

On other workloads the difference is fairly small, but surprisingly this branch demonstrates slightly higher overall throughput (which I wasn't expecting) and lower consumer latency (which was easier to foresee).

perf_test_runs_batched_vs_stable.zip

@gerhard (Contributor) commented Oct 23, 2017

Great improvement!

Page out behaviour before this change:
[screenshots: page-out-1-3.6.12, page-out-2-3.6.12]

Page out behaviour after this change:
[screenshot: page-out-3.6.13-dev-1388]

This is what used to happen when filling & draining a queue (no paging to disk):
[screenshots: fill-3.6.13-dev-1388, drain-3.6.12]

This is what happens now when filling & draining a queue (no paging to disk):
[screenshots: fill-3.6.12, drain-3.6.13-dev-1388]

@gerhard gerhard merged commit b55f79d into stable Oct 23, 2017
lukebakken added a commit that referenced this pull request Nov 1, 2017
Alternative for #1388 that does not use process dictionary.

Requires rabbitmq/rabbitmq-common#228

Fix waiting_bump values
lukebakken added a commit to rabbitmq/rabbitmq-common that referenced this pull request Nov 1, 2017
lukebakken added a commit that referenced this pull request Nov 2, 2017
@lukebakken lukebakken deleted the rabbitmq-server-batch-betas branch November 2, 2017 18:21