Describe the bug
Hello,
We have recently upgraded our RabbitMQ cluster from RabbitMQ 3.7.5 (Erlang 20.3.4) to RabbitMQ 4.1.1 (Erlang 27.3.4).
Since the upgrade, we have noticed that RabbitMQ's memory utilisation is continually growing.
Upon investigation, we have noted:
- 'rabbitmq-diagnostics memory_breakdown' indicates that 'metadata_store_ets' consumes most of the memory: 1.1684 GB (59.11%)
- 'rabbitmq-diagnostics observer' indicates the 'rabbit_khepri_topic_trie' ETS table consumes 1.0855 GB
My suspicion is that 'rabbit_khepri_topic_trie' is continually growing because one of our sub-systems puts a high churn of routing keys through a particular exchange.
This suspicion seems to be validated by running:
- rabbitmqctl eval 'ets:tab2list(rabbit_khepri_topic_trie).'
which indicates that most of the rows in this table relate to the exchange used by the sub-system in question (a cross-check sketch follows below).
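For completeness, here is roughly how we cross-check the live binding count against the ETS table size. This is only a sketch: the host, vhost, exchange name, and credentials below are placeholders for our environment, and it assumes the management plugin is enabled.

```python
import requests

HOST = "http://localhost:15672"  # management API endpoint; placeholder
VHOST = "%2F"                    # URL-encoded default vhost "/"
EXCHANGE = "my-exchange"         # placeholder for the high-churn exchange
AUTH = ("guest", "guest")        # placeholder credentials

# Bindings for which this exchange is the source.
resp = requests.get(
    f"{HOST}/api/exchanges/{VHOST}/{EXCHANGE}/bindings/source",
    auth=AUTH,
)
resp.raise_for_status()
print(f"live bindings on {EXCHANGE}: {len(resp.json())}")

# Compare with the ETS row count reported on the broker by:
#   rabbitmqctl eval 'ets:info(rabbit_khepri_topic_trie, size).'
```

In our case the binding count stays at roughly 2500 while the ETS row count keeps climbing.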
We have not made any changes to this sub-system other than upgrading RabbitMQ. Prior to the upgrade, RabbitMQ memory consumption was not continually growing.
I note that RabbitMQ 4.1.1 is using Khepri for metadata storage instead of the Mnesia store used in 3.7.5.
It appears that the Khepri metadata store does not cope with this high churn of routing keys the way Mnesia did.
To give an idea of the routing-key churn:
We have ~2500 routing keys currently applied to the exchange, yet the 'rabbit_khepri_topic_trie' table holds 4,335,432 rows, i.e. roughly 1,700 trie rows per currently-bound key.
Reproduction steps
- Create a high churn of different routing keys on an exchange (a reproduction sketch follows below).
- Observe the rabbit_khepri_topic_trie ETS table continually grow.
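A minimal script along the following lines reproduces the growth for us. It is only a sketch: the exchange and queue names are placeholders, and it assumes a local broker reachable with the pika client.

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Placeholder names; a plain topic exchange and one queue.
ch.exchange_declare(exchange="churn-x", exchange_type="topic")
ch.queue_declare(queue="churn-q")

# Bind and immediately unbind with a fresh routing key each iteration,
# so at most one binding exists at any moment.
for i in range(100_000):
    key = f"churn.{i}.leaf"
    ch.queue_bind(queue="churn-q", exchange="churn-x", routing_key=key)
    ch.queue_unbind(queue="churn-q", exchange="churn-x", routing_key=key)

conn.close()
```

While this runs, the row count reported by ets:info(rabbit_khepri_topic_trie, size) climbs steadily, even though at most one binding exists at any moment.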
Expected behavior
I would expect memory utilisation to correlate with the number of routing keys currently applied to an exchange, rather than grow indefinitely.
Additional context
No response