
[Go Client] - Add ability to send an atomic batch #3388

Closed
emanuel-v-r opened this issue Jan 18, 2019 · 7 comments
Labels
type/feature The PR added a new feature or issue requested a new feature

emanuel-v-r commented Jan 18, 2019

Is your feature request related to a problem? Please describe.
At the moment we can't produce/send an atomic batch through the client.
Currently, producer batching is a buffer that is flushed according to the batching configs. The problem is that we can lose message ordering if some messages fail.

Describe the solution you'd like
I would like to have a method to send an atomic batch.

Describe alternatives you've considered
An alternative could be adding a flush method that gives the client more control, with the ability to flush the buffer whenever desired. I don't know if that solves the problem, since with concurrent requests one caller could flush the messages of another.

emanuel-v-r added the type/feature label on Jan 18, 2019
merlimat added this to the 2.3.0 milestone on Jan 19, 2019
merlimat (Contributor) commented:

@imaramos

> Currently, producer batching is a buffer that is flushed according to the batching configs. The problem is that we can lose message ordering if some messages fail.

Ordering will not be lost, because the client library will internally resend all the messages from the failure point. You might see duplicates (unless you enabled the deduplication feature), but you will not get an ordering violation when pipelining multiple outstanding messages.

> An alternative could be adding a flush method that gives the client more control, with the ability to flush the buffer whenever desired.

We already have that method in Java and C++. We just have to expose it in the Python and Go wrappers.

E.g.:
Java: http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Producer.html#flush--
C++ : http://pulsar.apache.org/api/cpp/classpulsar_1_1_producer.html#a1cd59ffc4a23162eca39183ba4278146

emanuel-v-r (Author) commented:

@merlimat Even duplication could be a problem. The preferred behaviour would be to send everything or nothing, making it atomic while still achieving good throughput and without losing control over the sent messages. Don't you agree?

tuan6956 commented:
How about batch message acknowledgment in the consumer?

emanuel-v-r (Author) commented:

@tuan6956 I didn't understand; can you give more detail, please?

tuan6956 commented:
I mean: can I batch-acknowledge messages?
E.g. acknowledge messages 1 to 10 in one call.

emanuel-v-r (Author) commented:

@tuan6956 Sorry, I thought you were suggesting a solution to my problem. You can achieve that with cumulative ack, I assume, at least with an exclusive subscription.

emanuel-v-r (Author) commented:

@merlimat @wolfstudy Does this solve the issue I described before: "I don't know if that solves the problem, since with concurrent requests one caller could flush the messages of another."?
