Usage with mqemitter-redis #1
Just use mqemitter-redis everywhere. This is more for when you want a bunch of child_processes to communicate.
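For illustration, a minimal sketch of that setup: every worker runs its own aedes broker, all of them wired to the same Redis server through mqemitter-redis (connection options here are placeholders; check the aedes and mqemitter-redis READMEs for the exact names):

```js
'use strict'

// Each worker process runs its own broker instance; they share
// pub/sub state through the same Redis server via mqemitter-redis.
const aedes = require('aedes')({
  mq: require('mqemitter-redis')({ host: '127.0.0.1', port: 6379 })
})

const server = require('net').createServer(aedes.handle)

server.listen(1883, function () {
  console.log('aedes listening on port 1883')
})
```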
Our subscription tree is so huge that when mqemitter-redis loads it into qlobber it takes 3.5 GB of RAM; we can't afford a machine with 3.5 GB × cores of memory :(
I understand. I would recommend you run mqemitter-redis in the master process. If you look at the code, you can run an instance-level daemon with https://github.com/mcollina/mqemitter-cs and expose it on a local socket. So your daemon will hold your mqemitter-redis topic cache, it will stay pretty stable, and you can redeploy your aedes instances in the background. I'm happy to jump on a quick call to discuss. It might well be that you need something similar to this design in the persistence layer, as it's the persistence that holds all the subscription data: https://github.com/mcollina/aedes-cached-persistence/blob/master/index.js#L27 (we might even think about having those persisted in a level instance).
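A rough sketch of that layout, not a drop-in implementation: the `cs.server()` / `cs.client()` call shapes below are assumptions based on a reading of mqemitter-cs, so verify them against its README before relying on them.

```js
'use strict'

// Sketch: one daemon owns the mqemitter-redis instance and exposes it
// over a local socket with mqemitter-cs; each aedes process connects
// to that socket instead of holding its own copy of the state.
const net = require('net')
const cs = require('mqemitter-cs') // exact API shape assumed, check the README

// --- daemon process ---
const sharedEmitter = require('mqemitter-redis')({ port: 6379 })
net.createServer(cs.server(sharedEmitter)) // assumed: returns a connection handler
  .listen('/tmp/mqemitter.sock')

// --- each aedes process ---
const conn = net.connect('/tmp/mqemitter.sock')
const aedes = require('aedes')({
  mq: cs.client(conn) // assumed: wraps the socket in an mqemitter-compatible client
})
```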
That would be great 👍 Let me dig into your comment first!
Uh, my mistake!!! mqemitter-redis has no in-memory topic cache; it is the redis persistence that is our bottleneck!
@behrad it should be doable. Have a look at how https://github.com/mcollina/mqemitter-cs/blob/master/index.js works. The key is to use https://github.com/mcollina/tentacoli, which provides a request/response + streams multiplexing transport.
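A short sketch of the tentacoli request/response pattern being referred to; the event and callback shapes are taken from its README as I understand it, so treat them as assumptions:

```js
'use strict'

const net = require('net')
const tentacoli = require('tentacoli')

// server side: answer requests arriving over a multiplexed socket
net.createServer(function (socket) {
  const stream = tentacoli()
  socket.pipe(stream).pipe(socket)
  stream.on('request', function (req, reply) {
    // e.g. look up subscriptions for req.topic and reply with them
    reply(null, { topic: req.topic, subscriptions: [] })
  })
}).listen(4242)

// client side: multiplex request/response calls over one connection
const socket = net.connect(4242)
const instance = tentacoli()
socket.pipe(instance).pipe(socket)
instance.request({ topic: 'sensors/+/temperature' }, function (err, res) {
  if (err) throw err
  console.log(res)
})
```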
What about the idea of caching the whole subscription set on the filesystem (instead of in memory)?
You're talking about extending …?
I am talking about writing a new module that implements the aedes-persistence API and does that.
I played with tentacoli… Am I missing something @mcollina? We need multi-thousand-per-second req/rep multiplexing. What about nanomsg-node?
@behrad there is currently a performance regression in readable-stream 2.3, which tentacoli is using (nodejs/readable-stream#304). The number I have is around 23.5 req/ms, which we can approximate to 20,000 req/s, close to the performance of plain HTTP in Node. To be faster you would need to use https://github.com/mafintosh/protocol-buffers
Are you saying that nodejs/readable-stream#304 will give a ~2000x improvement, from 0.007 req/ms to 23 req/ms!?
Why not go with IPC instead? Like node-ipc, or even pure …
Considering we'd use native JSON (instead of msgpack), I don't think we need that, do we?
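For comparison, a bare-bones version of the "pure" IPC idea using only Node core: `child_process.fork()` plus the built-in message channel, which already serializes to JSON. Any extra transport library would have to beat this baseline.

```js
'use strict'

// Node's built-in IPC channel between a forked child and its parent
// does JSON serialization for us; this is the "pure" baseline.
const { fork } = require('child_process')

if (process.send) {
  // child: answer lookup requests from the parent
  process.on('message', function (msg) {
    process.send({ id: msg.id, result: 'matched subscriptions for ' + msg.topic })
  })
} else {
  // parent: fork a child running this same file and send it a request
  const child = fork(__filename)
  child.on('message', function (msg) {
    console.log('reply:', msg)
    child.kill()
  })
  child.send({ id: 1, topic: 'sensors/livingroom/temperature' })
}
```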
On my box I have (with readable-stream@2.2):
Each of those runs sends 1000 messages back and forth:
You need streaming capability. Plain req/res would not be enough, or it would be a lot of work to support the API we currently have with that.
Any reason other than backpressure support?
I mean something different. I think we should expose the entire aedes-persistence API over the wire, so it can be used with any real implementation.
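To make the "persistence API over the wire" idea concrete, here is a purely illustrative proxy for two representative aedes-persistence methods (`addSubscriptions`, `subscriptionsByTopic`); `transport.request()` is a hypothetical placeholder for whatever RPC layer (tentacoli or otherwise) would carry the calls, and the streaming parts of the API would need a stream-capable transport as noted above.

```js
'use strict'

// Illustration only: forward two representative aedes-persistence calls
// to a remote process over a request/response transport.
// `transport.request(payload, cb)` is a placeholder, not a real API.
function remotePersistence (transport) {
  return {
    addSubscriptions: function (client, subs, cb) {
      transport.request({ cmd: 'addSubscriptions', clientId: client.id, subs: subs }, cb)
    },
    subscriptionsByTopic: function (topic, cb) {
      transport.request({ cmd: 'subscriptionsByTopic', topic: topic }, function (err, res) {
        cb(err, res && res.subscriptions)
      })
    }
    // ...the remaining aedes-persistence methods would be proxied the same way,
    // with the stream-returning ones multiplexed over the same connection.
  }
}

module.exports = remotePersistence
```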
How does that scale?
I'd trade design/modularity for performance here, since I've been testing the underlying mosca/aedes persistence performance and its issues for over a year.
Do you have an example of Node HTTP server code that handles 20k req/sec in a single process?
Check out https://github.com/fastify/fastify. For what you want to achieve, you can get something better with protocol-buffers. It would be good to have this as an option in aedes-cached-persistence.
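For reference, the kind of minimal fastify server those single-process throughput figures are usually quoted against (a plain JSON route; callback-style `listen` as in the fastify versions of that era):

```js
'use strict'

const fastify = require('fastify')()

// A single plain-JSON route: roughly the workload behind the
// "tens of thousands of req/sec in one process" benchmark claims.
fastify.get('/', function (request, reply) {
  reply.send({ hello: 'world' })
})

fastify.listen(3000, function (err) {
  if (err) throw err
  console.log('listening on port 3000')
})
```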
However, store/delete subscriptions can be done concurrently by each process. Collecting them all inside the master worries me: the master will melt, considering the expensive, CPU-intensive Qlobber matching that it should handle at the same time. I am not getting the protocol-buffers idea in this regard, as that is a data format!?
protocol-buffers is a data format. Most of the processing time in a distributed system is spent parsing and generating the messages to send.
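What that suggestion amounts to in practice, using mafintosh/protocol-buffers (the schema below is just an example, not part of any existing module):

```js
'use strict'

const protobuf = require('protocol-buffers')

// Example schema: a compact binary encoding for subscription lookups,
// replacing JSON/msgpack on the wire.
const messages = protobuf(`
  message SubscriptionQuery {
    required string topic = 1;
  }
`)

const buf = messages.SubscriptionQuery.encode({ topic: 'sensors/+/temperature' })
console.log(buf.length, 'bytes on the wire')
console.log(messages.SubscriptionQuery.decode(buf))
```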
Yeah! But we can also switch to any format and test them once the distribution interface is clarified. My main concern is to have RPC/IPC only for Qlobber matching, and let the whole persistence API work in-process (locally).
👍
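A sketch of the split described a couple of comments up, under the assumption that only the Qlobber match is centralized: the master owns the big subscription tree, workers ask it for matches over the built-in cluster IPC channel, and everything else stays local to each process.

```js
'use strict'

// Only topic matching is centralized: the master owns the big Qlobber
// tree; workers request matches over IPC and keep the rest of the
// persistence API in-process.
const cluster = require('cluster')
const { Qlobber } = require('qlobber')

if (cluster.isMaster) {
  const matcher = new Qlobber({ separator: '/', wildcard_one: '+', wildcard_some: '#' })
  matcher.add('sensors/+/temperature', 'client-1') // the subscription tree lives only here

  const worker = cluster.fork()
  worker.on('message', function (msg) {
    // reply to a worker's match request with the matching subscribers
    worker.send({ id: msg.id, matches: matcher.match(msg.topic) })
  })
} else {
  process.on('message', function (msg) {
    console.log('matches from master:', msg.matches)
    process.exit(0)
  })
  process.send({ id: 1, topic: 'sensors/livingroom/temperature' })
}
```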
I want to share a Redis mqemitter across scaled-up aedes processes, @mcollina. Can you guide me on how to use a Redis-powered mqemitter shared with child processes?