
[FLINK-18449][table sql/api] Kafka topic discovery & partition discovery dynamically in Table API #12908

Merged
merged 1 commit into apache:master on Aug 20, 2020

Conversation

fsk119
Copy link
Member

@fsk119 fsk119 commented Jul 15, 2020


What is the purpose of the change

Enable Kafka connector topic discovery and partition discovery in the Table API.

Brief change log

  • Expose the options 'topic-pattern' and 'scan.topic-partition-discovery.interval'.
  • Add validation that rejects setting 'topic-pattern' and 'topic' together for a source, and setting 'topic-pattern' for a sink.
  • Read the values from the table options and use them to build the Kafka consumer.
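As a rough sketch of the validation described above (plain Java, not the actual Flink connector code; the class and method names below are hypothetical), the mutual-exclusion rules behave like this:

```java
import java.util.Optional;

public class TopicOptionsValidation {
    // Hypothetical stand-ins for the 'topic' and 'topic-pattern' table options.
    static void validateSourceOptions(Optional<String> topic, Optional<String> topicPattern) {
        if (topic.isPresent() && topicPattern.isPresent()) {
            throw new IllegalArgumentException(
                "Option 'topic' and 'topic-pattern' shouldn't be set together.");
        }
        if (!topic.isPresent() && !topicPattern.isPresent()) {
            throw new IllegalArgumentException(
                "Either 'topic' or 'topic-pattern' is required for a source.");
        }
    }

    static void validateSinkOptions(Optional<String> topic, Optional<String> topicPattern) {
        // A sink must write to a concrete topic; a pattern is only meaningful for discovery.
        if (topicPattern.isPresent()) {
            throw new IllegalArgumentException("Option 'topic-pattern' is not supported for sinks.");
        }
        if (!topic.isPresent()) {
            throw new IllegalArgumentException("Option 'topic' is required for a sink.");
        }
    }
}
```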

Verifying this change

This change added tests and can be verified as follows:

  • Added integration tests for new features
  • Added tests that validate that setting 'topic' and 'topic-pattern' together fails, and that setting 'topic-pattern' for a sink fails.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

flinkbot (Collaborator)

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 1a4e5ee (Wed Jul 15 13:11:58 UTC 2020)

✅no warnings

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

flinkbot (Collaborator) commented Jul 15, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

fsk119 (Member, Author) commented Jul 16, 2020

@wuchong CC

wuchong (Member) left a comment

Thanks for the contribution @fsk119, I left some comments.

Comment on lines 91 to 93
Pattern pattern,
Properties properties,
DeserializationSchema<RowData> deserializationSchema) {

Indent.


@Override
protected FlinkKafkaConsumerBase<RowData> createKafkaConsumer(
Pattern pattern,

topicPattern

Comment on lines 91 to 93
Pattern topicPattern,
Properties properties,
DeserializationSchema<RowData> deserializationSchema) {

Indent.

Comment on lines 104 to 111
DataType outputDataType,
@Nullable List<String> topics,
@Nullable Pattern topicPattern,
Properties properties,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
StartupMode startupMode,
Map<KafkaTopicPartition, Long> specificStartupOffsets,
long startupTimestampMillis) {

Indent.

Comment on lines 200 to 202
Pattern topicPattern,
Properties properties,
DeserializationSchema<RowData> deserializationSchema);

Indent.

docs/dev/table/connectors/kafka.md (outdated, resolved)
<td>optional for source(use 'topic' instead if not set)</td>
<td style="word-wrap: break-word;">(none)</td>
<td>String</td>
<td>Topic pattern from which the table is read. It will use input value to build regex expression to discover matched topics.</td>

The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note, only one of "topic-pattern" and "topic" can be specified for sources.
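For illustration of the pattern-based subscription described above (plain Java, not the connector code; the helper name and topic names are made up):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternDemo {
    // Return the topics whose names fully match the given pattern,
    // mirroring how a 'topic-pattern' subscription selects matching topics.
    static List<String> discover(Pattern topicPattern, List<String> allTopics) {
        return allTopics.stream()
            .filter(t -> topicPattern.matcher(t).matches())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern p = Pattern.compile("orders-\\d+");
        List<String> topics = List.of("orders-1", "orders-2", "payments", "orders-archive");
        // "orders-archive" does not match because the pattern requires digits after the dash.
        System.out.println(discover(p, topics)); // [orders-1, orders-2]
    }
}
```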

docs/dev/table/connectors/kafka.md (three more outdated, resolved review threads)
fsk119 requested a review from wuchong on July 30, 2020.
options.add(SCAN_STARTUP_TIMESTAMP_MILLIS);
options.add(SCAN_TOPIC_PARTITION_DISCOVERY);

duplicate

properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS,
String.valueOf(tableOptions
.getOptional(SCAN_TOPIC_PARTITION_DISCOVERY)
.map(val -> val.toMillis())

Suggested change:

-				.map(val -> val.toMillis())
+				.map(Duration::toMillis)
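The two forms are equivalent; a small standalone check (independent of Flink) showing the lambda and the method reference produce the same value:

```java
import java.time.Duration;
import java.util.Optional;

public class MethodRefDemo {
    public static void main(String[] args) {
        Optional<Duration> interval = Optional.of(Duration.ofSeconds(30));

        // Lambda form, as originally written in the PR.
        long viaLambda = interval.map(val -> val.toMillis()).orElse(Long.MIN_VALUE);

        // Method-reference form, as suggested in the review; same behavior, less noise.
        long viaMethodRef = interval.map(Duration::toMillis).orElse(Long.MIN_VALUE);

        System.out.println(viaLambda == viaMethodRef); // true
        System.out.println(viaMethodRef);              // 30000
    }
}
```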

));
} else {
throw new ValidationException(String.format(
errorMessageTemp, "topic-list", tableOptions.get(TOPIC)

"topic-list" -> "topic"? We don't have "topic-list" option.

@@ -139,10 +152,12 @@ public void testTableSource() {
Thread.currentThread().getContextClassLoader());

// Test scan source equals
KAFKA_SOURCE_PROPERTIES.setProperty("flink.partition-discovery.interval-millis", "1000");

Is it still needed? We have already set it in the static block.


private static boolean isSingleTopic(ReadableConfig tableOptions) {
// Option 'topic-pattern' is regarded as multi-topics.
return tableOptions.getOptional(TOPIC).isPresent() && tableOptions.get(TOPIC).split(",").length == 1;

The community recommends using a List ConfigOption for list values; the framework will handle the parsing. This also changes the separator to ';', but that is more aligned with other list options. You can declare a List ConfigOption like this:

	public static final ConfigOption<List<String>> TOPIC = ConfigOptions
			.key("topic")
			.stringType()
			.asList()
			.noDefaultValue()
			.withDescription("...");

Then you can write return tableOptions.getOptional(TOPIC).map(t -> t.size() == 1).orElse(false); here.

Sorry for the late reminder.
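A minimal sketch of the suggested check, using a plain Optional<List<String>> in place of Flink's ReadableConfig (the class name and treatment of an absent 'topic' are illustrative, not the actual connector code):

```java
import java.util.List;
import java.util.Optional;

public class SingleTopicCheck {
    // With a list-typed option, the framework has already split the value,
    // so the check reduces to a size comparison. An absent 'topic'
    // (e.g. when 'topic-pattern' is used instead) is treated as multi-topic.
    static boolean isSingleTopic(Optional<List<String>> topics) {
        return topics.map(t -> t.size() == 1).orElse(false);
    }

    public static void main(String[] args) {
        System.out.println(isSingleTopic(Optional.of(List.of("orders"))));             // true
        System.out.println(isSingleTopic(Optional.of(List.of("orders", "payments")))); // false
        System.out.println(isSingleTopic(Optional.empty()));                           // false
    }
}
```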

wuchong (Member) commented Aug 4, 2020

Btw, could you add an integration test for this?

…partition discovery for Kafka source in Table API

This closes apache#12908
wuchong (Member) left a comment

LGTM.

Will merge once the build passes.

Comment on lines +62 to +63
@Nullable List<String> topics,
@Nullable Pattern topicPattern,

Currently, it is very verbose to pass these two parameters together here and there. An improvement would be to use KafkaTopicsDescriptor, but that can be addressed as a separate issue in the future.

@wuchong wuchong merged commit b8ee51b into apache:master Aug 20, 2020
@fsk119 fsk119 deleted the FLINK-18449 branch March 17, 2021 12:03
jnh5y pushed a commit to jnh5y/flink that referenced this pull request Dec 18, 2023
…partition discovery for Kafka source in Table API

This closes apache#12908
4 participants