Retrieve Kafka topic partitions offset #69
In my scenario, I would say solution 1 is preferable and makes sense to me; however, I don't know how much Watermill should try to be consistent across all Pub/Sub implementations, as this change would have to be Kafka-specific.
@lebaptiste: I don't know if current Kafka Go drivers allow you to query how many messages are still not consumed: IBM/sarama#489. I would suggest keeping the offset in your consumer app if you are not using a consumer group, and using it to start consuming from a specific offset (instead of from the beginning).
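The "keep the offset in your consumer app" idea can be sketched roughly like this; the file-based store, the function names, and the `offsetOldest` constant (mirroring `sarama.OffsetOldest`) are all illustrative assumptions, not Watermill or Sarama API:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Mirrors sarama.OffsetOldest ("start from the beginning of the partition").
const offsetOldest int64 = -2

// loadOffset reads the last processed offset persisted by the app
// (a single number in a file). ok is false when nothing was stored yet.
func loadOffset(path string) (offset int64, ok bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, false
	}
	n, err := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
	if err != nil {
		return 0, false
	}
	return n, true
}

// resumeFrom converts the stored position into the offset to start
// consuming from: one past the last processed message, or the
// beginning of the partition when nothing is stored.
func resumeFrom(stored int64, ok bool) int64 {
	if !ok {
		return offsetOldest
	}
	return stored + 1
}

func main() {
	path := "offset.txt"
	_ = os.WriteFile(path, []byte("41\n"), 0o644)
	defer os.Remove(path)

	stored, ok := loadOffset(path)
	fmt.Println(resumeFrom(stored, ok)) // prints 42
	// In a real Sarama consumer this value would be passed to
	// consumer.ConsumePartition(topic, partition, resumeFrom(stored, ok)).
}
```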
@lebaptiste there is no requirement that Pub/Sub implementations are limited to the PubSub interface; we can add some extra methods :) I'm going on holiday now and will be back on Monday. If you have some time, you can try experimenting with adding extra methods to the existing Kafka Pub/Sub for getting these offsets.
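One way such extra methods could be exposed is as an optional interface that callers probe with a type assertion, so generic code keeps depending only on the plain subscriber. This is a sketch only; the `PartitionOffsetProvider` name and signature are assumptions, not the actual PoC API:

```go
package main

import "fmt"

// PartitionOffsetProvider is a hypothetical extra interface a
// Kafka-specific subscriber could implement alongside the generic
// Pub/Sub interface.
type PartitionOffsetProvider interface {
	PartitionOffsets(topic string) (map[int32]int64, error)
}

// offsetsIfSupported keeps caller code generic: it only uses the
// extra method when the concrete subscriber actually provides it.
func offsetsIfSupported(sub interface{}, topic string) (map[int32]int64, bool) {
	p, ok := sub.(PartitionOffsetProvider)
	if !ok {
		return nil, false
	}
	offsets, err := p.PartitionOffsets(topic)
	if err != nil {
		return nil, false
	}
	return offsets, true
}

// fakeSubscriber stands in for a Kafka subscriber in this sketch.
type fakeSubscriber struct{}

func (fakeSubscriber) PartitionOffsets(topic string) (map[int32]int64, error) {
	return map[int32]int64{0: 42, 1: 17}, nil
}

func main() {
	offsets, ok := offsetsIfSupported(fakeSubscriber{}, "events")
	fmt.Println(ok, offsets[0], offsets[1]) // prints: true 42 17
}
```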
@lebaptiste I added a Proof of Concept in There are:
What do you think? :) You can try it by using
As commented by the Sarama author in Shopify/sarama#489, it depends on how you store offsets. IIRC Sarama does not do this out of the box. I store the offsets in Kafka's __consumer_offsets manually, then calculate a "lag" by periodically comparing the offset per consumer_group:topic:partition against the OffsetNewest. I run Sarama with its auto-offset commit disabled, manually curating my offsets.
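The lag calculation described here boils down to simple arithmetic once the two offset maps are in hand. In this sketch the "newest" values stand in for what Sarama's `client.GetOffset(topic, partition, sarama.OffsetNewest)` would return, and all names are illustrative:

```go
package main

import "fmt"

// partitionLag computes how far a consumer is behind per partition:
// newest[p] is the next offset to be produced on partition p, and
// committed[p] is the next offset the consumer will read.
func partitionLag(newest, committed map[int32]int64) map[int32]int64 {
	lag := make(map[int32]int64, len(newest))
	for p, n := range newest {
		c, ok := committed[p]
		if !ok {
			// No commit yet: the whole partition counts as lag.
			lag[p] = n
			continue
		}
		lag[p] = n - c
	}
	return lag
}

// totalLag sums the per-partition lag into a single number.
func totalLag(newest, committed map[int32]int64) int64 {
	var total int64
	for _, l := range partitionLag(newest, committed) {
		total += l
	}
	return total
}

func main() {
	newest := map[int32]int64{0: 100, 1: 50}
	committed := map[int32]int64{0: 90, 1: 50}
	fmt.Println(totalLag(newest, committed)) // prints 10
}
```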
@lebaptiste I have a service which loads a compacted topic from the beginning upon start. The service blocks while loading all events from Kafka. I determine the completion of this loading process by comparing the HighWaterMarkOffsets given by a partition against the offset state in my app for the same partition. When all offsets in all partitions align, the service has caught up. In the background I run a goroutine which periodically does basically the same, to keep the app in sync with any new events from the compacted topic. I control internal access to this mechanism with an RWLock.
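The catch-up check described above can be sketched as a small RWMutex-guarded state holder. The type and method names are assumptions; in a real consumer the high-water-mark values would come from Sarama's `PartitionConsumer.HighWaterMarkOffset()` (the offset of the next message to be produced):

```go
package main

import (
	"fmt"
	"sync"
)

// catchUpState tracks, per partition, the last offset the app has
// applied and the high-water mark reported for that partition.
type catchUpState struct {
	mu      sync.RWMutex
	applied map[int32]int64
	hwm     map[int32]int64
}

func newCatchUpState() *catchUpState {
	return &catchUpState{applied: map[int32]int64{}, hwm: map[int32]int64{}}
}

// Observe records progress for one partition.
func (s *catchUpState) Observe(partition int32, applied, highWaterMark int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.applied[partition] = applied
	s.hwm[partition] = highWaterMark
}

// Ready reports whether every observed partition has been consumed up
// to its high-water mark (the applied offset is the last one before it).
func (s *catchUpState) Ready() bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if len(s.hwm) == 0 {
		return false
	}
	for p, mark := range s.hwm {
		if s.applied[p] < mark-1 {
			return false
		}
	}
	return true
}

func main() {
	s := newCatchUpState()
	s.Observe(0, 9, 10) // caught up: last message before HWM 10 is offset 9
	s.Observe(1, 3, 10) // still behind
	fmt.Println(s.Ready()) // prints false
	s.Observe(1, 9, 10)
	fmt.Println(s.Ready()) // prints true
}
```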
I did Proof of Concept already: #71
Any news on this? So it didn't work. I can also use metadata, but that involves multiple type-casts from int to string and back. Any advice? Can we work on the context to make sure we pass around any values set on it during the unmarshaling phase? Are you going to provide built-in functions to retrieve Kafka offset information from the context? I'd be happy to work on it if we can agree on something concrete.
Hello @eafzali, I already started to implement it here: but I don't know when I will have time to finish it :) If you want, though, it would not require a lot of work to finish. Basically, it requires moving it to the https://github.com/ThreeDotsLabs/watermill-kafka repository and adding some tests (as I remember, it was working more or less).
OK nice, I think I can do that :)
@eafzali probably the best way to make it universal would be to add it somewhere in the router :) A PR is also welcome for that! :)
I have an API service which needs to consume a Kafka compacted topic before it can be considered ready to handle request traffic. How can I determine its readiness?
In this case, it would be best to know how many messages are still left to process before the consumer has caught up with the latest messages on the topic. It seems there is currently no way to obtain this information.
I suggest the partition offsets be retrieved at the time the consumer subscribes to a topic (from all consumed partitions). An alternative would be to include a field on messages, a flag `IsLatest` for example, to let the consumer know that a message was the last one at the time of retrieval (of course the offsets keep growing, so it should not be considered an absolute indication; it's time-sensitive). The best I can do as a workaround for now is to infer it: I can use the throughput and assume I've reached the end of the topic once the number of messages/sec drops significantly (during the catch-up phase the consumer processes as many messages as it can; afterwards, only as many as the "live" events currently produced on the topic).
I'd be happy to hear about other alternatives people might have come up with. Thanks.
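The throughput-based workaround described in the issue can be sketched as a simple peak-rate comparison; the threshold value and the function name are illustrative assumptions:

```go
package main

import "fmt"

// caughtUpByThroughput treats the consumer as caught up once the
// per-second message rate drops below some fraction of the peak rate
// seen during replay. rates is a series of messages/sec samples;
// threshold (e.g. 0.1) is an assumed tuning knob.
// It returns the index of the first sample after the drop, or -1 if
// the rate never dropped (still replaying, or the topic is very busy).
func caughtUpByThroughput(rates []float64, threshold float64) int {
	peak := 0.0
	for i, r := range rates {
		if r > peak {
			peak = r
		}
		if peak > 0 && r < peak*threshold {
			return i
		}
	}
	return -1
}

func main() {
	// Replay at ~5000 msg/s, then live traffic at ~20 msg/s.
	rates := []float64{4800, 5100, 4950, 20, 18}
	fmt.Println(caughtUpByThroughput(rates, 0.1)) // prints 3
}
```

As the issue notes, this heuristic is time-sensitive and only an inference; it can misfire if live traffic is bursty or if the replay itself stalls.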