Flush messages to disk in batches. #1388
Conversation
Alternative for #1388 that does not use process dictionary. Requires rabbitmq/rabbitmq-common#228
src/rabbit_variable_queue.erl
Outdated
case get(waiting_bump) of
    true -> ok;
    _    -> self() ! bump_reduce_memory_use,
            put(waiting_bump, waiting)
Instead of `waiting`, I think the value here should be `true`, otherwise the skip case won't be hit on future calls that arrive while the message is still pending. Also see #1393
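The pattern under discussion is a self-message debounce: the first caller sends itself a single `bump_reduce_memory_use` message and sets a flag, and later callers see the flag and skip scheduling a duplicate. The following is a minimal Python model of that logic (not RabbitMQ code; `MemoryUseBumper` and its method names are hypothetical stand-ins for the Erlang process), showing why the stored value must be truthy for the skip branch to fire:

```python
import queue

class MemoryUseBumper:
    def __init__(self):
        self.mailbox = queue.Queue()   # stands in for the Erlang process mailbox
        self.waiting_bump = False      # stands in for the process-dictionary key

    def request_bump(self):
        # Mirrors: case get(waiting_bump) of true -> ok; _ -> ... end
        if self.waiting_bump:
            return                     # a bump is already scheduled; coalesce
        self.mailbox.put("bump_reduce_memory_use")  # self() ! bump_reduce_memory_use
        self.waiting_bump = True       # must be true, or the skip branch never fires

    def handle_bump(self):
        msg = self.mailbox.get()
        self.waiting_bump = False      # clear the flag once the message is handled
        return msg

bumper = MemoryUseBumper()
bumper.request_bump()
bumper.request_bump()   # coalesced: no second message is queued
assert bumper.mailbox.qsize() == 1
```

If the flag were set to a value the guard never matches (the `waiting` atom in the original diff), every call would take the send branch and the mailbox would fill with redundant bump messages, which is exactly the bug the review comment points out.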
Right. Missed that.
Prior to this patch I was able to see both consumers and producers blocked for quite some time; after applying it, consumers were no longer blocked at the same point during the perf test run.
If messages are to be embedded in the queue index, there is no credit-flow limit, so message batches can grow too big and block the queue process. Limiting the batch size allows consumers to make progress while publishers are blocked by the paging-out process. [#151614048]
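The rationale above can be sketched as a bounded-batch flush loop. This is an illustrative Python model, not RabbitMQ's actual implementation; `flush_in_batches`, `write_batch`, and the `MAX_BATCH` cap are hypothetical names, and the real logic lives in `rabbit_variable_queue.erl`:

```python
MAX_BATCH = 64  # hypothetical cap; the real limit is internal to the queue process

def flush_in_batches(pending, write_batch, max_batch=MAX_BATCH):
    """Flush `pending` messages to disk in bounded batches.

    `write_batch` stands in for the queue-index write. Between batches
    the queue process can return to its mailbox and serve consumers,
    so a large backlog no longer blocks deliveries for its full duration.
    """
    batches = 0
    while pending:
        batch, pending = pending[:max_batch], pending[max_batch:]
        write_batch(batch)   # bounded amount of work per iteration
        batches += 1
    return batches

written = []
n = flush_in_batches(list(range(200)), written.append, max_batch=64)
assert n == 4  # 200 messages -> batches of 64, 64, 64, 8
```

Without the cap, the loop body would be a single `write_batch(pending)` covering every embedded message at once, which is the unbounded-batch behavior the commit message describes.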
Force-pushed from 786df6a to c574fc5
Here are some (fairly basic) benchmark results in a constrained environment, both with and without consumer rate limiting. When consumer throughput is really constrained, we see that publishers are throttled more aggressively but consumers are never completely blocked — the stdev of consumer latency is significantly lower on this branch. On other workloads the difference is fairly small, but surprisingly this branch demonstrates slightly higher overall throughput (which I wasn't expecting) and lower consumer latency (which was easier to foresee).
Alternative for #1388 that does not use process dictionary. Requires rabbitmq/rabbitmq-common#228 Fix waiting_bump values