[FLINK-13535][kafka] do not abort transactions twice during KafkaProducer startup #9323
What is the purpose of the change
During startup of a transactional Kafka producer from previous state, we recover in two steps:

1. In `TwoPhaseCommitSinkFunction`, we commit pending commit-transactions, abort pending transactions, and then call into `finishRecoveringContext()`.
2. In `FlinkKafkaProducer#finishRecoveringContext()`, we iterate over all recovered transaction IDs and abort them.

This may lead to some transactions being worked on twice, and there is quite some overhead from creating a `KafkaProducer` for each of these transactions.
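The sketch below walks through this two-step flow as it behaved before the change. It is a minimal, self-contained illustration; the class and method names are stand-ins and do not mirror Flink's internals exactly.

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the pre-change recovery flow described above.
public class RecoveryFlowSketch {

    public static void main(String[] args) {
        // Transactions found in the restored state.
        List<String> pendingCommitTransactions = Arrays.asList("sink-0-0");
        List<String> pendingTransactions = Arrays.asList("sink-0-1");
        // All transactional IDs the Kafka producer considers "recovered".
        List<String> recoveredTransactionalIds =
                Arrays.asList("sink-0-0", "sink-0-1", "sink-0-2");

        // Step 1: TwoPhaseCommitSinkFunction commits pending commit-transactions,
        // aborts pending transactions, and then calls the Kafka-specific hook.
        pendingCommitTransactions.forEach(id -> System.out.println("commit " + id));
        pendingTransactions.forEach(id -> System.out.println("abort " + id));
        finishRecoveringContext(recoveredTransactionalIds);
    }

    // Step 2 (before this change): abort every recovered transactional ID,
    // spinning up one short-lived KafkaProducer per ID -- including the IDs
    // that step 1 already committed or aborted.
    static void finishRecoveringContext(List<String> recoveredTransactionalIds) {
        for (String transactionalId : recoveredTransactionalIds) {
            System.out.println("create KafkaProducer and abort " + transactionalId);
        }
    }
}
```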
Brief change log

- Call `finishRecoveringContext()` with a collection of all transactions that `TwoPhaseCommitSinkFunction` has already covered.
- Change `FlinkKafkaProducer` and `FlinkKafkaProducer011` to ignore transactional IDs from that set (see the sketch after this list).
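To make the second bullet concrete, here is a minimal, runnable sketch of the skip logic, assuming the already-handled transactions are passed in as a collection as described above; the types and method names are simplified stand-ins for Flink's internals, not the actual API.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified stand-ins for Flink's internals; only the skip logic is the point here.
public class AbortOnlyUnhandledIdsSketch {

    // Stand-in for the producer's per-transaction state holding the transactional ID.
    static final class KafkaTransactionState {
        final String transactionalId;

        KafkaTransactionState(String transactionalId) {
            this.transactionalId = transactionalId;
        }
    }

    // After the change, the recovery hook receives the transactions that
    // TwoPhaseCommitSinkFunction already committed or aborted and skips their
    // transactional IDs instead of aborting them a second time.
    static void finishRecoveringContext(
            List<String> recoveredTransactionalIds,
            List<KafkaTransactionState> handledTransactions) {

        Set<String> handledIds = new HashSet<>();
        for (KafkaTransactionState state : handledTransactions) {
            handledIds.add(state.transactionalId);
        }

        for (String transactionalId : recoveredTransactionalIds) {
            if (handledIds.contains(transactionalId)) {
                continue; // already handled in step 1 -- no extra KafkaProducer needed
            }
            // In the real producer this would create a short-lived transactional
            // KafkaProducer for the ID and abort the lingering transaction.
            System.out.println("abort leftover transactional id: " + transactionalId);
        }
    }

    public static void main(String[] args) {
        finishRecoveringContext(
                Arrays.asList("sink-0-0", "sink-0-1", "sink-0-2"),
                Arrays.asList(new KafkaTransactionState("sink-0-1")));
    }
}
```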
Verifying this change

This change is already covered by existing tests, such as `FlinkKafkaProducerITCase` and `KafkaProducerExactlyOnceITCase`.

Does this pull request potentially affect one of the following parts:
- @Public(Evolving): yes

Documentation