Discard messages for toppars that saw a seek operation during a fetch or batch processing #367

Merged: 11 commits from fix/seek-stale-batches merged into tulios:master on Jun 6, 2019
Conversation

JaapRood
Collaborator

After calling consumer.seek, any new messages processed should come from the sought-to offset.

Before attempting a fetch, the consumerGroup of the consumer takes any consumer.seek calls into account, fetching messages from the right offset. However, since fetching is an async operation, if consumer.seek is called while a fetch is in progress, the fetched messages are still returned for processing with consumer.run.

To fix this, we can get pretty far by filtering the messages once more after the response of a fetch is received. By checking whether the topic and partition of a batch has a pending seek operation, we know to discard its messages.

Another situation is that a batch of messages might currently be being processed. To get the behaviour we're after, we have to check for each message whether a seek operation is pending (similar to how we check whether we're still running). In the case of eachMessage we can do this for the user; for eachBatch we expose an isStale function (like isRunning).
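
For illustration, an eachBatch handler could combine isRunning and isStale to stop working through a batch once a seek has invalidated it. This is only a rough sketch: eachBatchAutoResolve, resolveOffset, and heartbeat are assumed from the kafkajs eachBatch API, and processMessage is a hypothetical user function.

await consumer.run({
  eachBatchAutoResolve: false,
  eachBatch: async ({ batch, resolveOffset, heartbeat, isRunning, isStale }) => {
    for (const message of batch.messages) {
      // Bail out if the consumer is stopping, or if a seek has made this batch stale
      if (!isRunning() || isStale()) break

      await processMessage(message) // hypothetical processing logic
      resolveOffset(message.offset)
      await heartbeat()
    }
  },
})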

@JaapRood
Collaborator Author

Typically with these kinds of issues I'd start with a test, but I've had some trouble figuring out the best way to approach this. I've tried implementing an integration test in consumeMessages.spec.js, but I'm having trouble nailing down the timing so the seek happens exactly during a fetch operation. While I'm sure I could figure this out, I thought I'd get some opinions before investing more time into it 😅.

@JaapRood
Collaborator Author

Not sure how that test failure is triggered by these changes 🤔.

@tulios
Owner

tulios commented May 22, 2019

We have a lot of integration tests; they can be flaky sometimes. I am re-running the broken step. I am a bit busy this week; I will try to review this as soon as I can.

@tulios requested review from tulios and Nevon on May 22, 2019 08:03
@JaapRood
Collaborator Author

Meanwhile, I thought I'd add some context on how we ran into this; it might help in figuring out an integration test.

In stream processing, upon every rebalance there is a setup phase for each assignment taken on by the consumer. As this might take a while, consumption for that assignment is immediately paused with consumer.pause, to be resumed with consumer.resume once setup is complete. A common thing we see in setup is to perform a consumer.seek, especially when the offset is stored outside of Kafka (a recommended practice, as it can give you atomicity when recording offsets; imagine writing results and offsets to Postgres in a single transaction).
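
For illustration, that setup pattern looks roughly like this (a sketch only: the assignment value and the loadStoredOffset helper are hypothetical, while pause, seek, and resume are the consumer methods mentioned above):

// Pause the newly assigned topic immediately after the rebalance
const assignment = { topic: 'example-topic', partitions: [0] } // hypothetical assignment
consumer.pause([{ topic: assignment.topic }])

// Setup phase: restore the externally stored offset, e.g. from Postgres
const offset = await loadStoredOffset(assignment) // hypothetical helper, returns an offset string
for (const partition of assignment.partitions) {
  consumer.seek({ topic: assignment.topic, partition, offset })
}

// Setup is complete: resume consumption from the sought-to offset
consumer.resume([{ topic: assignment.topic }])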

This setup triggers the issue described by this pull request. Upon receiving the rebalance, the consumer.pause can't come quickly enough to prevent the assignment from being fetched a first time.

@JaapRood
Collaborator Author

JaapRood commented May 28, 2019

It took me a while, but I eventually managed to implement an integration test that isolates the issue. By leveraging a higher maxWaitTimeInMs and reaching EOF on a partition, we can reach a state where consumer.seek is called while a fetch is in progress. A producer writing new messages is used as the trigger to end the fetch (by satisfying minBytes), as well as to provide messages for the mechanism to discard. Without the suggested fix the test fails by consuming the new messages; with the fix it discards those and refetches from the correct offset.
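
In outline, the test does something like the following. This is a simplified sketch of the approach described above, not the actual spec: the topic name, group id, initialMessageCount, and the waitFor helper are made up.

// Assumes `kafka` (a Kafka client) and `producer` are already set up, and that
// `initialMessageCount` messages were produced to 'test-topic' beforehand.

// A long fetch wait means that, once the partition is at EOF, the next fetch
// stays in flight long enough for us to seek into it.
const consumer = kafka.consumer({ groupId: 'test-group', maxWaitTimeInMs: 10000, minBytes: 1 })

const consumedOffsets = []
await consumer.connect()
await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
await consumer.run({
  eachMessage: async ({ message }) => {
    consumedOffsets.push(message.offset)
  },
})

// Wait until all existing messages are consumed, so the consumer reaches EOF
// and the next (empty) fetch is in flight, then seek while it is pending.
await waitFor(() => consumedOffsets.length === initialMessageCount) // hypothetical helper
consumer.seek({ topic: 'test-topic', partition: 0, offset: '0' })

// Producing new messages satisfies minBytes and ends the in-flight fetch. Without
// the fix the consumer processes them directly; with the fix it discards them and
// refetches from offset 0, so consumption after the seek restarts at '0'.
await producer.send({ topic: 'test-topic', messages: [{ key: 'k', value: 'new message' }] })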

@JaapRood
Collaborator Author

Before writing the additional tests to also cover stale batches, I'm keen to get an initial review. Once everyone is happy with the direction I'll get onto the other tests. Those should be easier to isolate, as we have control over the consumption loop 😅!

@tulios
Owner

tulios commented May 28, 2019

I will try to look at it this week. :D

@@ -187,6 +187,7 @@ module.exports = class Runner {
},
uncommittedOffsets: () => this.consumerGroup.uncommittedOffsets(),
isRunning: () => this.running,
isStale: ({ topic, partition }) => this.consumerGroup.hasSeekOffset({ topic, partition }),
@tulios
Owner

Maybe isStale isn't clear enough; what do you think? I can't offer a better name right now; let me think about it.

@JaapRood
Collaborator Author

JaapRood commented Jun 3, 2019

Naming is always hard :/ I tried to reason from the perspective of a person implementing eachBatch and how the name of the method should hint at what they can or cannot do. In this case we'd want to imply that one could continue processing, but that the messages have become outdated. Maybe isCanceled makes more sense?

@JaapRood
Collaborator Author

Having sat on isCanceled for a bit, I think it might be confusing in combination with isRunning. What would be the difference between the two? While isStale in isolation is maybe less obvious, I do think it avoids that confusion.

@tulios
Owner

tulios commented Jun 3, 2019

@JaapRood I like the PR, the changes make sense.

@JaapRood
Collaborator Author

JaapRood commented Jun 3, 2019

I added tests to verify that both eachMessage and eachBatch can be made to discard messages correctly; the latter needed a fix to isStale to pass. I amended the documentation as well.

As far as I'm concerned this is ready to be merged. Pleased to hear otherwise :)

@tulios
Owner

tulios commented Jun 6, 2019

@JaapRood yes, this PR is ready. I will merge it 🎉

@tulios merged commit c06a362 into tulios:master on Jun 6, 2019
@JaapRood deleted the fix/seek-stale-batches branch on June 6, 2019 21:54