
[FLINK-35791][kafka] add database and table info of canal/debezium json format for kafka sink. #3461

Merged
merged 4 commits into apache:master on Aug 8, 2024

Conversation

lvyanquan
Contributor


@yuxiqian (Contributor) left a comment


Thanks for @lvyanquan's contribution! Just left some minor comments about JavaDocs.

@github-actions bot added the docs (Improvements or additions to documentation) label on Jul 10, 2024
@@ -132,6 +132,17 @@ public byte[] serialize(Event event) {
}

DataChangeEvent dataChangeEvent = (DataChangeEvent) event;
reuseGenericRowData.setField(
3, StringData.fromString(dataChangeEvent.tableId().getSchemaName()));
Contributor

org.apache.flink.table.data.StringData#fromString was used to generate binary data from string here, but I noticed that org.apache.flink.cdc.common.data.binary.BinaryStringData#fromString is more frequently used in CDC code base. Though they're basically the same (CDC version was copied from Flink), is it better if we can stick to one consistent binary encoding algorithm?

Contributor Author

reuseGenericRowData will be serialized by the SerializationSchema, so Flink types are passed here.

Contributor

Got it, thanks for the clarification.
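
To make the exchange above concrete, here is a minimal sketch of the pattern being discussed: the reused GenericRowData is handed directly to a Flink SerializationSchema (the canal/debezium JSON serializer), which is why Flink's org.apache.flink.table.data.StringData is used rather than CDC's org.apache.flink.cdc.common.data.binary.BinaryStringData. Field index 3 and the tableId() accessors come from the diff above; the wrapper class name, the row arity, and the table-name field index are illustrative assumptions, not the PR's actual code.

import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.cdc.common.event.DataChangeEvent;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;

/** Illustrative wrapper, not the connector's actual class. */
class DatabaseAndTableEnrichingSerializer {

    // Reused across events to avoid per-record allocation; arity 5 is an assumption.
    private final GenericRowData reuseGenericRowData = new GenericRowData(5);

    // The wrapped Flink serializer, e.g. a canal-json or debezium-json SerializationSchema.
    private final SerializationSchema<RowData> jsonSerializer;

    DatabaseAndTableEnrichingSerializer(SerializationSchema<RowData> jsonSerializer) {
        this.jsonSerializer = jsonSerializer;
    }

    byte[] serialize(DataChangeEvent dataChangeEvent) {
        // Flink's StringData (not CDC's BinaryStringData) is used because the reused
        // row is consumed by a Flink SerializationSchema downstream.
        reuseGenericRowData.setField(
                3, StringData.fromString(dataChangeEvent.tableId().getSchemaName()));
        // Field index 4 for the table name is an assumption for this sketch.
        reuseGenericRowData.setField(
                4, StringData.fromString(dataChangeEvent.tableId().getTableName()));
        // Remaining fields (operation type, before/after payload, ...) are omitted here.
        return jsonSerializer.serialize(reuseGenericRowData);
    }
}

Reusing a single GenericRowData keeps the serialization hot path allocation-free; only the mutable fields change per event.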

@melin commented on Aug 1, 2024

MySQL -> Kafka: if a transform is added, the sink schemaName and tableName are obtained, which is not as expected.

@PatrickRen (Contributor) left a comment

@lvyanquan Thanks for the PR! LGTM

Could you rebase onto the latest master branch and run CI again? Will merge after CI passes.

@lvyanquan (Contributor Author)

Rebase done.

@PatrickRen merged commit 9d6154f into apache:master on Aug 8, 2024
21 checks passed
qiaozongmi pushed a commit to qiaozongmi/flink-cdc that referenced this pull request on Sep 23, 2024
Labels
approved, docs (Improvements or additions to documentation), kafka-pipeline-connector, reviewed
4 participants