[FLINK-20928] Fix flaky test by retrying notifyCheckpointComplete until either commit success or timeout #17342
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.

Automated Checks
Last check on commit 7f92b13 (Thu Sep 23 14:53:20 UTC 2021)
Warnings:

Mention the bot in a comment to re-run the automated checks.

Review Progress
Please see the Pull Request Review Guide for a full explanation of the review process. The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
Force-pushed from 46d4952 to a6fc284
Force-pushed from a6fc284 to 32ad8a3
@flinkbot run azure
@lindong28 very nice catch. I think your analysis of the cause is correct: the KafkaSourceReader does not treat offset committing as mandatory, which can lead to flaky tests. Your retry loop should harden the test.
@@ -73,7 +73,7 @@ public void commitOffsets(
     if (offsetsToCommit.isEmpty()) {
         return;
     }
-    SplitFetcher<Tuple3<T, Long, Long>, KafkaPartitionSplit> splitFetcher = fetchers.get(0);
+    SplitFetcher<Tuple3<T, Long, Long>, KafkaPartitionSplit> splitFetcher = getRunningFetcher();
Nit: Does this change have any effect on the fix? If not, maybe make the change a separate commit?
Can you explain a bit more why this is a performance improvement?
Thanks for the review @fapaul. This change does not affect the fix. I have updated the PR to remove this change.

Regarding why this could improve performance: suppose the first fetcher created by this KafkaSourceFetcherManager has been closed and removed from fetchers. Prior to this change, every call to commitOffsets() creates a new SplitFetcher just to commit the offsets. If commitOffsets() is called N times, then N SplitFetchers are created, which is really inefficient. To fix this problem, we can commit the offsets using any running fetcher in fetchers, which is what getRunningFetcher() achieves here.
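For reference, a minimal sketch of what such a helper could look like, assuming fetchers is a map from fetcher id to SplitFetcher maintained by the fetcher manager, and that createSplitFetcher()/startFetcher(...) are the manager's factory and start hooks (all assumptions here, not the exact Flink code):

```java
// Sketch only: return any currently running fetcher, or null if none is left.
// Assumes closed fetchers have already been removed from the `fetchers` map.
private SplitFetcher<Tuple3<T, Long, Long>, KafkaPartitionSplit> getRunningFetcher() {
    return fetchers.isEmpty() ? null : fetchers.values().iterator().next();
}

// In commitOffsets(): only create (and start) a new fetcher when none is running,
// instead of unconditionally looking up fetcher 0.
private void commitViaRunningFetcher() {
    SplitFetcher<Tuple3<T, Long, Long>, KafkaPartitionSplit> splitFetcher = getRunningFetcher();
    if (splitFetcher == null) {
        splitFetcher = createSplitFetcher(); // assumed factory method on the manager
        startFetcher(splitFetcher);          // assumed: new fetchers must be started
    }
    // ... enqueue the offset-commit task on splitFetcher ...
}
```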
I created https://issues.apache.org/jira/browse/FLINK-24398 to track this issue.
Force-pushed from 32ad8a3 to 9d74252 (fix flaky test by retrying notifyCheckpointComplete until either commit success or timeout)
Thank you very much for the contribution. I merged it into master. Could you please create backport PRs?
@AHeise Thank you for helping review the PR. This PR just fixes a flaky test. Does this need to be backported? I am happy to create backport PRs, but I have not done this before. Could you let me know which branches need a backport PR?
@lindong28 sorry for the late response. Can you cherry-pick your commit and create a pull request against the 1.14 branch?
I merged the backport into 1.14. According to the ticket it also affects 1.13. Can you verify that and do another backport? If not, please close the ticket.
What is the purpose of the change

The test KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete is flaky according to the test failure history in FLINK-20928. This PR attempts to fix this flaky test.

Brief change log
Here are the problems with the existing code that could explain why the test is flaky:

- The test calls KafkaSourceReader.notifyCheckpointComplete(...) once and expects the offset commit to be successful.
- KafkaSourceReader.notifyCheckpointComplete(...) does not guarantee that the offset commit succeeds. This is because it calls KafkaConsumer.commitAsync(...) just once and won't retry even if the commit fails with a retriable exception (illustrated by the sketch below).
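For illustration, here is a minimal hedged sketch (not the actual connector code) of such a fire-and-forget commit using the Kafka client API; the callback observes a failure but nothing retries it, so a test that waits on this single attempt may never see a committed offset:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

class SingleShotCommitter {
    // Sketch: one async commit attempt. If the commit fails with a retriable
    // error (e.g. RetriableCommitFailedException), no retry happens here.
    static void commitOnce(
            KafkaConsumer<?, ?> consumer,
            Map<TopicPartition, OffsetAndMetadata> offsets) {
        consumer.commitAsync(offsets, (committed, exception) -> {
            if (exception != null) {
                // The failure is only logged; the offsets stay uncommitted.
                System.err.println("Offset commit failed: " + exception);
            }
        });
    }
}
```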
This PR made the following changes to address the issues described above:

- Update KafkaSourceReader.notifyCheckpointComplete so that it can be called multiple times with the same checkpointId.
- Update CommonTestUtils.waitUtil(...) to support a user-specified sleep time. Previously waitUtil(...) hardcoded the sleep time to 1 ms.
- Update KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete to retry KafkaSourceReader.notifyCheckpointComplete once per second until either the offset commit has completed or the max wait time has been reached (see the sketch after this list).
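A hedged sketch of the resulting retry pattern; reader, offsetsCommitted(), and the 60-second timeout are illustrative stand-ins based on the change log above, not the exact test code:

```java
import java.time.Duration;

// Sketch: re-trigger the checkpoint-complete notification once per second
// until the offset commit is observed or the max wait time is reached.
void notifyUntilCommittedOrTimeout(long checkpointId) throws Exception {
    Duration maxWait = Duration.ofSeconds(60); // assumed max wait time
    long deadline = System.nanoTime() + maxWait.toNanos();
    while (true) {
        reader.notifyCheckpointComplete(checkpointId); // idempotent after this PR
        if (offsetsCommitted()) {                      // hypothetical commit check
            return;
        }
        if (System.nanoTime() >= deadline) {
            throw new AssertionError("Offsets were not committed before the timeout");
        }
        Thread.sleep(1000L); // retry interval from the change log
    }
}
```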
Verifying this change

The test KafkaSourceReaderTest#testOffsetCommitOnCheckpointComplete consistently passes across 200 runs.

Does this pull request potentially affect one of the following parts:
- @Public(Evolving): (no)

Documentation