
clarify delivery concurrency is per client #57

Closed
wants to merge 1 commit into mcollina:master from behrad:clarifyConcurrency

Conversation

behrad (Collaborator) commented Jun 17, 2016

No description provided.

coveralls commented Jun 17, 2016

Coverage remained the same at 96.998% when pulling 4ab42d8 on behrad:clarifyConcurrency into a66ae34 on mcollina:master.

mcollina (Collaborator) commented Jun 17, 2016
This is not true. Delivery concurrency is for the whole broker, and the limit is on the fan-in, not on the fan-out.

behrad (Collaborator, Author) commented Jun 17, 2016

And how is this achieved by conn.setMaxListeners(opts.concurrency * 2)?

behrad closed this Jun 17, 2016
behrad (Collaborator, Author) commented Jul 1, 2016

@mcollina Can you clarify this for me?

mcollina (Collaborator) commented Jul 1, 2016

A piece of code is probably worth more :).

conn.setMaxListeners(opts.concurrency * 2)

is needed to avoid a bad warning, nothing more. It's set to twice the max concurrency because I had to account for QoS 1/2 messages.

Regarding the actual concurrency limit, it is here:
https://github.com/mcollina/mqemitter/blob/master/mqemitter.js#L97-L114
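
A minimal, hypothetical sketch of the pattern those lines implement may help: a counter-based gate on the publish (fan-in) side. The names below (ConcurrentEmitter, current, pending, release) are illustrative only, not mqemitter's actual identifiers; the point is that the counter moves once per published message, not once per matching listener.

```js
// Illustrative concurrency gate (not mqemitter's actual code).
// Each published message occupies one slot; a message that matches
// many listeners still counts as a single unit of concurrency.
function ConcurrentEmitter (concurrency) {
  this.concurrency = concurrency
  this.current = 0      // messages currently being processed
  this.pending = []     // publishes waiting for a free slot
  this.listeners = []   // { topic, fn } pairs; matching simplified to equality
}

ConcurrentEmitter.prototype.on = function (topic, fn) {
  this.listeners.push({ topic: topic, fn: fn })
}

ConcurrentEmitter.prototype.emit = function (message, done) {
  if (this.current >= this.concurrency) {
    // fan-in limit reached: queue the publish itself
    this.pending.push({ message: message, done: done })
    return
  }
  this._deliver(message, done)
}

ConcurrentEmitter.prototype._deliver = function (message, done) {
  var self = this
  this.current++        // one increment per message, regardless of matches

  var matches = this.listeners.filter(function (l) {
    return l.topic === message.topic
  })

  var remaining = matches.length
  if (remaining === 0) return release()

  matches.forEach(function (l) {
    l.fn(message, function () {
      if (--remaining === 0) release()
    })
  })

  function release () {
    self.current--
    if (done) done()
    var next = self.pending.shift()
    if (next) self._deliver(next.message, next.done)
  }
}
```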

mcollina (Collaborator) commented Jul 1, 2016

It is also invoked in mqemitter-redis: https://github.com/mcollina/mqemitter-redis/blob/master/mqemitter-redis.js#L33.

behrad (Collaborator, Author) commented Jul 1, 2016

Now it makes sense, thank you :)

The implementation shows the limit is the number of parallel publish matchings done inside the broker; however, each publish may match any number of clients :)

mcollina (Collaborator) commented Jul 1, 2016

Would you mind sending a PR against the README? It might be helpful to others.

behrad (Collaborator, Author) commented Jul 1, 2016

> The implementation shows the limit is the number of parallel publish matchings done inside the broker; however, each publish may match any number of clients :)

And this means that, in the current implementation, the max number of messages delivered concurrently may be higher than concurrency!? Yes?

mcollina (Collaborator) commented Jul 1, 2016

concurrency is the number of unique messages being delivered, not considering the number of clients we are delivering them to.
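
As a concrete illustration, here is a small usage sketch, assuming mqemitter's documented API (mq({ concurrency }), on(topic, fn), emit(message, cb)): a single published message that matches several subscribers still occupies only one concurrency slot.

```js
var mq = require('mqemitter')

// concurrency: 1 means at most one message is being published at a time,
// no matter how many subscribers each message fans out to.
var emitter = mq({ concurrency: 1 })

// three subscribers matching the same topic
var subscribers = [1, 2, 3]
subscribers.forEach(function (i) {
  emitter.on('hello/world', function (message, cb) {
    console.log('subscriber', i, 'got', message.payload)
    cb()
  })
})

// one unique message: it counts as one unit of concurrency,
// even though it is delivered to three listeners
emitter.emit({ topic: 'hello/world', payload: 'hi' }, function () {
  console.log('publish completed')
})
```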

behrad (Collaborator, Author) commented Jul 1, 2016

> concurrency is the number of unique messages being delivered, not considering the number of clients we are delivering them to.

But here https://github.com/mcollina/mqemitter/blob/master/mqemitter.js#L126 current is incremented per _do, regardless of the actual number of matches!!!

mcollina (Collaborator) commented Jul 1, 2016

There is no check on the matches; the check is on the message that is being published, not on the messages being delivered.

behrad (Collaborator, Author) commented Jul 1, 2016

> the check is on the message that is being published, not on the messages being delivered.

Exactly! And hence we need to emphasize the words "unique message", and maybe change the word "delivered" to "forwarded" or "published", since "delivered" increases the chance of ambiguity.

mcollina (Collaborator) commented Jul 1, 2016

Published is a good term I think. PR?

behrad (Collaborator, Author) commented Jul 1, 2016

👍 Yup, will create one.
