Missing elixir.ez in release #1

Closed

Knight1 opened this issue Mar 5, 2018 · 4 comments

Knight1 commented Mar 5, 2018

Hey :)

Nice plugin, but I was forced to compile it myself. You forgot to upload elixir.ez to the release, which is a dependency ;)

```
{"init terminating in do_boot",{error,{missing_dependencies,[elixir],[rabbitmq_message_deduplication_exchange]}}}
init terminating in do_boot ({error,{missing_dependencies,[elixir],[rabbitmq_message_deduplication_exchange]}})
```

What about high availability? Does this plugin handle that case?

Thank you so much
Tobias

noxdafox added a commit that referenced this issue Mar 5, 2018
The elixir-*.ez file was missing.

Thanks to @Knight1 for reporting.

Signed-off-by: Matteo Cafasso <noxdafox@gmail.com>

noxdafox commented Mar 5, 2018

Hi,

sorry for that! I just added the elixir*.ez file to the release and updated the README. Thanks for reporting!

The plugin relies on Mnesia to cache the already-seen messages. The table should be shared across the nodes of the same cluster; if one node crashes, it will copy the table over once it restarts. I have not tested that scenario yet, though (it's version 0.0.1 ;) ).

The tables are stored in memory for simplicity and performance reasons. If you need a more robust mechanism, I could add a configuration flag to force the cache onto disk. I am not sure how consistency would be ensured across multiple nodes, though; some investigation would be required.

This plugin aims to be a simple yet effective way to reduce the number of duplicate messages published to the queues. If message de-duplication is critical for your architecture, I would recommend performing it in the consumers.
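
For reference, a publisher targeting the deduplication exchange from a Python (pika) client could look roughly like the sketch below. It assumes the exchange type and names documented in the README (`x-message-deduplication`, `x-cache-size`, `x-deduplication-header`); the exchange, queue, routing key and header values are placeholders for the example.

```python
# Minimal sketch, assuming the exchange type and argument/header names from the
# plugin's README; exchange, queue and header values below are placeholders.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a deduplication exchange with a bounded in-memory cache.
channel.exchange_declare(
    exchange="dedup-exchange",
    exchange_type="x-message-deduplication",
    arguments={"x-cache-size": 10000},
)
channel.queue_declare(queue="container-actions")
channel.queue_bind(
    queue="container-actions",
    exchange="dedup-exchange",
    routing_key="container-actions",
)

# Messages carrying a deduplication header value that is still present in the
# exchange's cache are dropped instead of being routed again.
channel.basic_publish(
    exchange="dedup-exchange",
    routing_key="container-actions",
    body=b'{"action": "stop", "container": "abc123"}',
    properties=pika.BasicProperties(
        headers={"x-deduplication-header": "stop-abc123"}
    ),
)
connection.close()
```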

Knight1 commented Mar 5, 2018

Hi,

no problem :)

Yeah, I depend on it because, keyed on an idempotent message ID, I execute actions like kill, remove, stop, etc. on Moby containers. My best idea right now is that, before executing anything inside the consumers, I pull all messages with that ID and check whether one is already in the ACK state. After a successful execution I remove all the other queued messages. But I guess I would always get the same messages back when I ask for new ones :(
Any idea or link?

Thanks!

noxdafox commented Mar 6, 2018

I would highly recommend against your approach: if you have multiple consumers, it would be very hard to ensure consistent behaviour.

The usual approach is to rely on shared data storage (a database or key-value store) which the consumers can use to check whether a newly received message was already processed. You can check this solution as a reference example.
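
As a rough illustration of that approach, a consumer-side check against a shared Redis instance might look like the sketch below; `handle_container_action`, the queue name and the key prefix are placeholders, not part of the plugin.

```python
# Minimal sketch of consumer-side deduplication against a shared Redis instance.
import pika
import redis

cache = redis.Redis(host="localhost", port=6379)

def on_message(channel, method, properties, body):
    message_id = properties.message_id or body.decode()
    # SET NX succeeds only for the first consumer that claims this ID;
    # the TTL bounds how long the "already processed" marker is kept.
    first_time = cache.set(f"dedup:{message_id}", 1, nx=True, ex=3600)
    if first_time:
        handle_container_action(body)  # placeholder for the idempotent action (kill/stop/remove)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="container-actions", on_message_callback=on_message)
channel.start_consuming()
```

In a real setup you would also decide how to handle a failure after the key has been set, e.g. only set the key once the action has succeeded.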

In most of my use cases this approach is overkill, as it requires managing another service (I usually implement it using Redis) and a bit of extra logic in the consumers which is not that simple to get right.

Hence the development of this plugin.

The plugin's limit is the deduplication cache size. If the cache overflows, new elements will replace existing ones. This means that if the problem is large enough, the plugin cannot ensure all messages will be deduplicated (you would have the same limitation with the above approach, though).

Nevertheless, if the variety of messages is limited (which seems to be your case), it should just work. Your use case seems simple enough that this plugin should fit your purpose.

I can look into adding disk persistence of the cache if you need it.

@noxdafox
Owner

I created a new ticket for the disk-persistence feature; feel free to comment there if you still need it.

Closing this issue as it went out of scope.
