Assign partitions based on lag #25
One thing I'm not sure about is how to calculate the offset lag per partition. Maybe I'm missing something, but I don't see a way to get this using ListOffset, etc.
You will have to use `offsetFetch`.
But that just gives you the group offset; to calculate the lag you'd also need the high watermark and the starting offset.
It's a combination of listOffset and offsetFetch.
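For reference, later versions of kafkajs grew admin helpers that wrap exactly these two protocol calls. A minimal sketch of the lag calculation, assuming a recent (v2+) kafkajs admin client, where `fetchTopicOffsets` wraps ListOffsets and `fetchOffsets` wraps OffsetFetch; the broker address and client id are placeholders:

```js
const { Kafka } = require('kafkajs')

// Sketch: per-partition lag for one consumer group and topic.
async function fetchPartitionLags(groupId, topic) {
  const admin = new Kafka({ clientId: 'lag-check', brokers: ['localhost:9092'] }).admin()
  await admin.connect()
  try {
    // High watermark and log start offset per partition (ListOffsets)
    const topicOffsets = await admin.fetchTopicOffsets(topic)
    // Committed group offset per partition (OffsetFetch)
    const [groupOffsets] = await admin.fetchOffsets({ groupId, topics: [topic] })
    const committed = new Map(groupOffsets.partitions.map(p => [p.partition, p.offset]))

    return topicOffsets.map(({ partition, high, low }) => {
      const offset = committed.get(partition)
      // kafkajs reports '-1' when the group has no committed offset yet;
      // fall back to the log start offset in that case, as discussed above.
      const start = offset == null || offset === '-1' ? low : offset
      return { partition, lag: Number(high) - Number(start) }
    })
  } finally {
    await admin.disconnect()
  }
}
```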
I have a PoC implementation now that seems to work. I'll create a PR with it as soon as I've dealt with some error cases and gotten some feedback. The assignment algorithm itself seems to produce balanced assignments; it's mostly calculating the offset lag that's not the nicest at the moment. I'm not sure it makes sense to bundle it with the main distribution of KafkaJS though, as for most people the round-robin assigner would be fine. Since the assigner is pluggable anyway, I think it might make more sense to publish it as a separate module.
In cases where the offset lag is unevenly spread across partitions, it makes sense to assign partitions across consumers based on the current offset lag, rather than completely round robin.
Essentially (sketched in code after the list):
1. Calculate the offset lag for each partition in the topic
2. Sort the partitions by offset lag in descending order
3. For each partition:
   1. Assign the partition to the consumer with the lowest number of assigned partitions (across all topics)
   2. If two or more consumers have the same number of assigned partitions, assign the partition to the one where the sum of the offset lag for the assigned partitions is the lowest
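A standalone sketch of that greedy step in plain JavaScript. This is illustrative only, not the actual PR or kafkajs's pluggable-assigner interface; the input shapes (`partitionLags`, `members`) are assumptions:

```js
// partitionLags: [{ topic, partition, lag }]; members: ['consumer-1', ...]
function assignByLag(partitionLags, members) {
  const state = members.map(memberId => ({ memberId, partitions: [], totalLag: 0 }))

  // Steps 1-2: sort partitions by offset lag, descending
  const sorted = [...partitionLags].sort((a, b) => b.lag - a.lag)

  for (const { topic, partition, lag } of sorted) {
    // Step 3: fewest assigned partitions wins; ties broken by lowest summed lag
    const target = state.reduce((best, c) =>
      c.partitions.length < best.partitions.length ||
      (c.partitions.length === best.partitions.length && c.totalLag < best.totalLag)
        ? c
        : best
    )
    target.partitions.push({ topic, partition })
    target.totalLag += lag
  }

  return state
}
```

Placing the largest lags first gives the tie-breaking sum the most room to even out the load. To use something like this in practice, it would still need to be wrapped in the pluggable assigner protocol mentioned above.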