
Conversation

@chucheng92 (Member) commented Mar 3, 2023

What is the purpose of the change

Fix the erroneous partitionDiscoveryIntervalMs condition check in the Kafka source that prevents the noMoreNewPartitionSplits flag from being set.

Brief change log

Correct the partitionDiscoveryIntervalMs check so that noMoreNewPartitionSplits is set when the interval is <= 0 (sketched below)
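
For context, here is a minimal sketch of the check this PR corrects. The class and method names are illustrative assumptions, not the actual KafkaSourceEnumerator code:

```java
// Minimal sketch (assumed names, not the real Flink source): with the old
// guard, an interval of exactly 0 never set the flag, so a bounded source
// discovered partitions once but could never report "no more splits".
public class PartitionDiscoverySketch {

    private final long partitionDiscoveryIntervalMs;
    private boolean noMoreNewPartitionSplits = false;

    public PartitionDiscoverySketch(long partitionDiscoveryIntervalMs) {
        this.partitionDiscoveryIntervalMs = partitionDiscoveryIntervalMs;
    }

    // Called after a round of partition discovery has been handled.
    void afterPartitionDiscovery() {
        // Corrected check: any interval <= 0 means "no periodic discovery",
        // so this round was the only one and no new splits can appear later.
        if (partitionDiscoveryIntervalMs <= 0) {
            noMoreNewPartitionSplits = true;
        }
    }

    boolean noMoreNewPartitionSplits() {
        return noMoreNewPartitionSplits;
    }

    public static void main(String[] args) {
        PartitionDiscoverySketch sketch = new PartitionDiscoverySketch(0L);
        sketch.afterPartitionDiscovery();
        System.out.println(sketch.noMoreNewPartitionSplits()); // true after the fix
    }
}
```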

Verifying this change

Add new cases to KafkaEnumeratorTest and refine existing ones (scenario sketch below):
  • testRunWithDiscoverPartitionsOnceWithZeroMsToCheckNoMoreSplit
  • testRunWithDiscoverPartitionsToCheckNoMoreSplitOnce
  • testRunWithPeriodicPartitionDiscoveryToCheckNoMoreSplitOnce
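
As a rough illustration of the scenarios those cases cover, here is a standalone sketch (hypothetical names, not the real KafkaEnumeratorTest; run with java -ea):

```java
// Standalone sketch of the discovery-interval scenarios the new tests cover.
public class NoMoreSplitsScenarios {

    // The corrected predicate: interval <= 0 means one-shot discovery,
    // after which no more new partition splits can arrive.
    static boolean flagAfterDiscovery(long partitionDiscoveryIntervalMs) {
        return partitionDiscoveryIntervalMs <= 0;
    }

    public static void main(String[] args) {
        // Interval of 0: discover once, then the flag must be set so a
        // bounded source can quit (the FLINK-31319 scenario).
        assert flagAfterDiscovery(0L);
        // Negative interval: discovery disabled, the flag must also be set.
        assert flagAfterDiscovery(-1L);
        // Positive interval: periodic discovery keeps running, flag stays unset.
        assert !flagAfterDiscovery(30_000L);
        System.out.println("all scenario checks passed");
    }
}
```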

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no
  • If yes, how is the feature documented? not applicable

@flinkbot (Collaborator) commented Mar 3, 2023

CI report:

Bot commands: the @flinkbot bot supports the following command:
  • @flinkbot run azure: re-run the last Azure build

@chucheng92 force-pushed the FLINK-31319 branch 5 times, most recently from dae8345 to 455fcec, March 3, 2023 20:00
@chucheng92 changed the title from "[FLINK-31319][connectors/kafka] Fix kafka partitionDiscoveryIntervalMs error condition check cause not set noMoreNewPartitionSplits" to "[FLINK-31319][connectors/kafka] Fix kafka partitionDiscoveryIntervalMs error condition check cause not set noMoreNewPartitionSplits flag" Mar 3, 2023
@chucheng92 changed the title from "[FLINK-31319][connectors/kafka] Fix kafka partitionDiscoveryIntervalMs error condition check cause not set noMoreNewPartitionSplits flag" to "[FLINK-31319][connectors/kafka] Fix kafka partitionDiscoveryIntervalMs error condition check cause not set noMoreNewPartitionSplits" Mar 3, 2023
@chucheng92 (Member, Author) commented:

@flinkbot run azure

@chucheng92 (Member, Author) commented:

@PatrickRen Hi, Qingsheng. PTAL.

@chucheng92 changed the title from "[FLINK-31319][connectors/kafka] Fix kafka partitionDiscoveryIntervalMs error condition check cause not set noMoreNewPartitionSplits" to "[FLINK-31319][connectors/kafka] Kafka new source partitionDiscoveryIntervalMs=0 cause bounded source can not quit" Mar 16, 2023
@PatrickRen (Contributor) left a comment
Thanks for the patch @chucheng92 ! LGTM.

As the Kafka connector will be migrated from the Flink main repo to an individual repo as of 1.18, could you close this PR and create a new one in the flink-connector-kafka repo? It would also be nice to back-port this patch to 1.16 and 1.17 in the Flink repo.

@chucheng92 (Member, Author) commented:

> Thanks for the patch @chucheng92 ! LGTM.
>
> As the Kafka connector will be migrated from the Flink main repo to an individual repo as of 1.18, could you close this PR and create a new one in the flink-connector-kafka repo? It would also be nice to back-port this patch to 1.16 and 1.17 in the Flink repo.

OK, thanks Qingsheng.

@chucheng92 (Member, Author) commented:

> Thanks for the patch @chucheng92 ! LGTM.
> As the Kafka connector will be migrated from the Flink main repo to an individual repo as of 1.18, could you close this PR and create a new one in the flink-connector-kafka repo? It would also be nice to back-port this patch to 1.16 and 1.17 in the Flink repo.

OK, thanks Qingsheng. Follow-ups:

  1. PR-8 (external repo)
  2. BP-1.16
  3. BP-1.17

@PatrickRen Hi, Qingsheng. Can you help check these?

@chucheng92 chucheng92 deleted the FLINK-31319 branch March 20, 2023 03:12