Extended API to support batched producer/consumer methods #53
For SPSC you cannot avoid an ordered write to the array, otherwise you lose the required ordering. You can delay it to make a bunch of elements visible at once, but I doubt there is much to gain for SPSC.
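The "delay" being discussed can be sketched as follows. This is an illustrative single-producer/single-consumer ring buffer, not JCTools code: the producer writes a whole batch of elements and then publishes them to the consumer with a single ordered store of the producer index, instead of one publication per element. The class and method names (`SpscBatchQueue`, `offerBatch`) are hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Illustrative SPSC ring buffer: a batch of elements becomes visible to the
// consumer via one ordered write of the producer index (the delayed
// publication discussed above). Capacity must be a power of two.
final class SpscBatchQueue<E> {
    private final AtomicReferenceArray<E> buffer;
    private final int mask;
    private final AtomicLong producerIndex = new AtomicLong();
    private final AtomicLong consumerIndex = new AtomicLong();

    SpscBatchQueue(int capacityPow2) {
        buffer = new AtomicReferenceArray<>(capacityPow2);
        mask = capacityPow2 - 1;
    }

    /** Writes the whole batch, then publishes it with a single ordered index store. */
    boolean offerBatch(E[] batch) {
        long pIndex = producerIndex.get();
        if (pIndex + batch.length - consumerIndex.get() > mask + 1) {
            return false; // not enough free slots for the whole batch
        }
        for (int i = 0; i < batch.length; i++) {
            // ordered (lazySet) element stores; they become visible no later
            // than the index publication below
            buffer.lazySet((int) (pIndex + i) & mask, batch[i]);
        }
        producerIndex.lazySet(pIndex + batch.length); // single publication point
        return true;
    }

    E poll() {
        long cIndex = consumerIndex.get();
        if (cIndex >= producerIndex.get()) return null; // consumer gates on the index
        int offset = (int) cIndex & mask;
        E e = buffer.get(offset);
        buffer.lazySet(offset, null);
        consumerIndex.lazySet(cIndex + 1);
        return e;
    }
}
```

Because the consumer gates on the producer index, partially written batches are never observed; the trade-off is that elements only become visible at batch boundaries.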
Yes, what you call delay is what I mean. Why do you doubt a throughput increase for SPSC? I have scenarios where 1,000-100,000 elements per second are passed through an SPSC queue.
I recommend you test and measure rather than speculate. On current hardware the SPSC queue has been measured to deliver throughput of 350-470M messages per second; 1K-100K per second is not going to be an issue.
Batch produce/consume interfaces which leave out the batch size are harder to reason about. The issues I see are around commitment:
Either way there is a problem when the queue can only fulfil part of the declared batch, because the caller has committed either to producing enough elements to fill the claimed slots or to consuming the full size of the batch.
Resolved with the MessagePassingQueue API.
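The shape of the resolution can be sketched as follows: instead of the caller committing to a batch size up front, the queue pulls elements from a `Supplier` (fill) or pushes them into a `Consumer` (drain) and returns how many it actually transferred. This is an illustrative stand-in backed by `ArrayDeque`, not the real JCTools implementation; the class name `BatchedQueue` is hypothetical.

```java
import java.util.ArrayDeque;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of the fill/drain batch shape: the queue, not the caller, decides
// how many elements are transferred, so no unfulfillable commitment is made.
final class BatchedQueue<E> {
    private final ArrayDeque<E> q = new ArrayDeque<>();
    private final int capacity;

    BatchedQueue(int capacity) { this.capacity = capacity; }

    /** Pulls up to {@code limit} elements from the supplier; returns the count filled. */
    int fill(Supplier<E> s, int limit) {
        int filled = 0;
        while (filled < limit && q.size() < capacity) {
            q.offer(s.get()); // supplier is only asked for elements the queue can accept
            filled++;
        }
        return filled;
    }

    /** Pushes up to {@code limit} elements into the consumer; returns the count drained. */
    int drain(Consumer<E> c, int limit) {
        int drained = 0;
        E e;
        while (drained < limit && (e = q.poll()) != null) {
            c.accept(e);
            drained++;
        }
        return drained;
    }
}
```

Because fill/drain report the number actually transferred, partial fulfilment is part of the contract rather than a broken commitment.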
Could we save some volatile index and array updates when using a batched/buffered add for a producer in a high-throughput context? Optimally, we could just wrap/decorate an existing SPSC queue and adapt the element-adding behavior.
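The wrap/decorate idea might look like the following sketch. The decorator (`BufferedProducer`, a hypothetical name, not JCTools API) collects elements in a plain local buffer and forwards them to the wrapped queue in one flush; note that the saving on volatile writes only materializes if the wrapped queue exposes a genuine batch insert, since with a plain `java.util.Queue` the flush below still offers one element at a time.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical producer-side decorator: buffers adds locally (plain writes,
// no inter-thread publication) and hands them to the wrapped queue in one
// flush once a threshold is reached.
final class BufferedProducer<E> {
    private final Queue<E> target;
    private final List<E> buffer;
    private final int threshold;

    BufferedProducer(Queue<E> target, int threshold) {
        this.target = target;
        this.threshold = threshold;
        this.buffer = new ArrayList<>(threshold);
    }

    void add(E e) {
        buffer.add(e); // local, single-threaded write
        if (buffer.size() >= threshold) flush();
    }

    void flush() {
        // Sketch assumes the target has room; a real implementation would
        // handle rejection, and would call a batch insert if the queue has one.
        for (E e : buffer) target.offer(e);
        buffer.clear();
    }
}
```

Used from the single producer thread only; elements are invisible to the consumer until `flush()` runs, which is the latency cost of the batching.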