Per-vhost message store #567

Closed
michaelklishin opened this issue Jan 20, 2016 · 6 comments

michaelklishin (Member) commented Jan 20, 2016

This idea has been floating around since at least late 2012: we should use separate message stores for different virtual hosts, e.g. two per vhost (one for persistent and one for transient messages).

Some of the benefits are:

Downsides:

  • Virtual host initialisation (or restart) will be more expensive. Most users don't seem to use a lot of virtual hosts, so this is a reasonable trade-off.
michaelklishin (Member, Author) commented:

This is effort-high because migrating existing data can take a while to cover well with tests; otherwise it would be effort-medium.

hairyhum (Contributor) commented Mar 9, 2016

What if we enable per-vhost message stores for new vhosts only, and recommend shovelling messages for migration?
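
For illustration, that migration path could use a dynamic shovel roughly like this (a hedged sketch: the vhost names "old" and "new", the queue name "orders", and the shovel name are made up):

rabbitmqctl set_parameter shovel migrate-orders \
  '{"src-uri": "amqp://localhost/old", "src-queue": "orders",
    "dest-uri": "amqp://localhost/new", "dest-queue": "orders",
    "delete-after": "queue-length"}'

This drains the existing queue in the old vhost into the same-named queue in the new vhost and stops once the initial backlog has been transferred.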

michaelklishin (Member, Author) commented:

I'm afraid that's not gonna fly with hosted RabbitMQ providers, and at least some Cloud Foundry users. It would be great to automatically migrate messages on boot (if we can).

hairyhum self-assigned this Mar 10, 2016
hairyhum (Contributor) commented:

Initial research:
Thanks to the rabbit_msg_store architecture, enabling per-vhost grouping is fairly easy: start a message store in a subdirectory per vhost and control the stores with a supervisor.
Migration of old data can still be a problem, since there is no vhost information in message records.
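
To make the per-vhost grouping concrete, a supervisor for per-vhost stores could be sketched roughly as below (illustrative only: msg_store_vhost_sup_sketch and the msg_store child module are made-up names, and the real rabbit_msg_store start_link takes more arguments than shown):

-module(msg_store_vhost_sup_sketch).
-behaviour(supervisor).

-export([start_link/1, start_store_for_vhost/2, init/1]).

%% One supervisor owns all per-vhost message stores; each store keeps
%% its files under BaseDir plus a vhost-specific subdirectory.
start_link(BaseDir) ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, [BaseDir]).

%% Dynamically start a store for a vhost; with simple_one_for_one the
%% extra argument is appended to the child's start args.
start_store_for_vhost(SupPid, VHostDir) ->
    supervisor:start_child(SupPid, [VHostDir]).

init([BaseDir]) ->
    ChildSpec = {msg_store,
                 {msg_store, start_link, [BaseDir]},
                 transient, 30000, worker, [msg_store]},
    {ok, {{simple_one_for_one, 10, 10}, [ChildSpec]}}.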

hairyhum (Contributor) commented Apr 6, 2016

Migration mechanism proposal:

%% Pseudocode. Migrate the messages of all durable queues from the
%% single old persistent store into per-vhost stores, one queue at a
%% time in parallel, then delete the old store.
move_messages_to_vhost_store() ->
    Queues = get_durable_queues(),
    OldStore = run_old_persistent_store(),
    Migrations = spawn_for_each(fun(Queue) ->
                                    migrate_queue(Queue, OldStore)
                                end, Queues),
    wait_for_completion(Migrations),
    delete_old_store(OldStore).

migrate_queue(Queue, OldStore) ->
    OldStoreClient = get_client(OldStore),
    NewStore       = ensure_new_store_exist(get_vhost(Queue)),
    NewStoreClient = get_client(NewStore),
    %% Walk the queue index and copy every message it references from
    %% the old store into the new per-vhost store.
    walk_queue_index(
        fun(MessageIdInStore) ->
            Msg = get_msg_from_store(MessageIdInStore, OldStoreClient),
            put_message_to_store(Msg, NewStoreClient)
        end,
        Queue).

michaelklishin (Member, Author) commented:

@hairyhum given that upgrades perform a backup of the node data dir, that sounds reasonable.

michaelklishin added this to the 3.7.0 milestone Nov 12, 2016
michaelklishin added a commit that referenced this issue Jun 23, 2017
It emits a warning and breaks compilation on OTP 20.
Originally introduced in 492e23b,
likely unintentionally.

Fixes #1272, references #567, #766.
hairyhum added a commit that referenced this issue Jul 24, 2018
Before #567 all binding recovery functions were executed once on
node restart.
Executing table_filter can be expensive with a high number of vhosts
and bindings, and effectively it should be called only once.
Moved to a separate function, which should be called from the global
recovery function (`rabbit_vhost:recover/0`).
michaelklishin pushed a commit that referenced this issue Jul 25, 2018
(cherry picked from commit e5f4cbd)