Evaluate the feasibility of using "lazy" queue mode by default #568

Open
michaelklishin opened this Issue Jan 20, 2016 · 5 comments

4 participants
michaelklishin (Member) commented Jan 20, 2016

"Lazy queue mode" makes more sense for most users, judging from some of the most common issues that come up on rabbitmq-users and in other support channels.

We should investigate switching the default queue mode to lazy, and after a release or so consider removing the "variable mode" altogether. The gains from variable mode are only visible to those using exclusively transient messages, and even then they aren't particularly significant. Years of feedback suggest that variable mode is a feature that creates operational pain.
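For reference, lazy mode can already be opted into per queue today via a policy. A minimal example invocation (the policy name `lazy-bulk` and the queue-name pattern are illustrative, not anything from this issue):

```shell
# Apply lazy mode to every queue whose name starts with "bulk."
# (policy name and pattern are made up for illustration)
rabbitmqctl set_policy lazy-bulk "^bulk\." '{"queue-mode":"lazy"}' --apply-to queues
```

Clients can also request it at declare time by passing the `x-queue-mode` queue argument with the value `lazy`.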

hairyhum (Contributor) commented Jan 20, 2016

Is "variable mode" the current default mode?

michaelklishin (Member) commented Jan 20, 2016

Correct.

tenor commented Jan 22, 2016

I don't know what the numbers are, but I think users who are mostly interested in transient messaging would be unfairly penalized by a lazy default.

I'd like to propose a default "hybrid" mode:

  1. All queues are created in the current 'default' mode.
  2. When the backlog in a queue hits a certain size limit, the queue is converted to a 'lazy' one.
    For non-durable queues this involves paging all current messages to disk (a one-time performance hit) and processing all new incoming messages lazily. For durable queues, all current messages are also paged to disk, so that non-persistent messages are saved as well.
    Alternatively:
    For both durable and non-durable queues, all current messages are kept in the cache, but new messages (including the message that caused the cache limit to be hit) are processed lazily. The cache is never paged to disk since it never grows larger than a few MB.
  3. Once a queue has been converted to a lazy queue, it stays lazy until it is empty, at which point it is reconverted to a 'default' queue.

I think this proposal preserves the performance gains RabbitMQ brings to transient messaging while still meeting the goals set for lazy queues.
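The three numbered steps above can be sketched as a small state machine. This is purely illustrative of the proposal, not RabbitMQ code; `HybridQueue` and `BACKLOG_LIMIT` are made-up names, and the actual paging and caching work is elided:

```python
# Illustrative sketch of the proposed "hybrid" queue mode.
# BACKLOG_LIMIT is an assumed threshold; the real value would be tunable.
BACKLOG_LIMIT = 10_000

class HybridQueue:
    def __init__(self):
        self.mode = "default"   # step 1: every queue starts in default mode
        self.backlog = 0        # number of messages waiting in the queue

    def publish(self):
        self.backlog += 1
        # Step 2: once the backlog hits the limit, convert to lazy;
        # in the proposal, current messages would be paged to disk here.
        if self.mode == "default" and self.backlog >= BACKLOG_LIMIT:
            self.mode = "lazy"

    def deliver(self):
        if self.backlog > 0:
            self.backlog -= 1
        # Step 3: a lazy queue reverts to default only once it drains.
        if self.mode == "lazy" and self.backlog == 0:
            self.mode = "default"
```

The sketch makes the asymmetry explicit: conversion to lazy happens at a threshold, but conversion back waits for an empty queue, which avoids flapping between modes around the limit.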

michaelklishin (Member) commented Jan 23, 2016

@tenor this is exactly the kind of thing we are trying to avoid: the current default backing queue module is way too complicated and tries to be too smart. It falls on its ass in practice. Lazy queues are much dumber and much more predictable.

It's not any more difficult to switch queue mode to "variable" than to "lazy" right now. This change is not going to ship until Q4 2016 anyway, so we may find all kinds of improvements to either mode before then.

tenor commented Jan 23, 2016

@michaelklishin I see your point.
Messages are committed to disk every 100ms or so, so there may not be a significant performance drop for transient queues: many messages might go straight to a consumer and never be committed to disk at all.

@bdshroyer bdshroyer self-assigned this Feb 5, 2016

@michaelklishin michaelklishin changed the title from "Lazy mode" by default to Evaluate the feasibility of using "lazy" queue mode by default Feb 23, 2016

@michaelklishin michaelklishin removed this from the 3.7.0 milestone Dec 6, 2016

@michaelklishin michaelklishin modified the milestone: n/a Jan 12, 2017
