
[BEAM-6063] KafkaIO: add writing support with ProducerRecord #7052

Merged: 1 commit into apache:master on Nov 22, 2018

Conversation

aromanenko-dev (Contributor):

Added a new transform, WriteRecords, based on Kafka's ProducerRecord. The API of the old Write transform is kept as it was, but it now uses WriteRecords under the hood to write data. All internal functionality that is not visible to the user has been changed to use ProducerRecord instead of KV.
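The delegation described above (the old KV-based Write wrapping the new record-based WriteRecords) can be sketched in plain Java. Note this is a hypothetical, stdlib-only illustration: `KV`, `ProducerRecordStub`, and `WriteDelegation` are stand-ins, not Beam's or Kafka's real classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class WriteDelegation {

    // Stand-ins for Beam's KV and Kafka's ProducerRecord (illustration only).
    record KV<K, V>(K key, V value) {}
    record ProducerRecordStub<K, V>(String topic, Integer partition, K key, V value) {}

    // The "new" transform: accepts full records, so callers can set
    // topic and partition per element.
    static <K, V> List<String> writeRecords(List<ProducerRecordStub<K, V>> records) {
        List<String> sent = new ArrayList<>();
        for (ProducerRecordStub<K, V> r : records) {
            sent.add(r.topic() + ":" + r.key() + "=" + r.value());
        }
        return sent;
    }

    // The "old" KV-based entry point, kept for compatibility: it wraps each
    // KV into a record for the configured topic and delegates to writeRecords.
    static <K, V> List<String> write(String topic, List<KV<K, V>> elements) {
        Function<KV<K, V>, ProducerRecordStub<K, V>> toRecord =
            kv -> new ProducerRecordStub<>(topic, null, kv.key(), kv.value());
        return writeRecords(elements.stream().map(toRecord).toList());
    }

    public static void main(String[] args) {
        List<KV<String, Integer>> input = List.of(new KV<>("a", 1), new KV<>("b", 2));
        System.out.println(write("events", input)); // [events:a=1, events:b=2]
    }
}
```

The point of the pattern is that existing KV-based user code keeps compiling unchanged, while all new functionality only has to be implemented once, in the record-based path.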




@aromanenko-dev (Author):

R: @rangadi Please take a look.

@rangadi (Contributor)

rangadi commented Nov 16, 2018

LGTM overall.
I left a few comments. The most important one is about leaving a TODO about using an API similar to PubsubIO's. I think we should do that in 3.0 (the next major version).

@Override
public void encode(ProducerRecord<K, V> value, OutputStream outStream) throws IOException {
stringCoder.encode(value.topic(), outStream);
intCoder.encode(value.partition() != null ? value.partition() : Integer.MAX_VALUE, outStream);
rangadi (Contributor):
-1 for partition? Any reason to choose the max value for null?

aromanenko-dev (Author):

Yes, I think "-1" should work for partition (but not for timestamp).
Is there a better way to deal with null values in a Coder? Could NullableCoder help here?
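The two approaches in play here (a sentinel value vs. a NullableCoder-style presence byte) can be sketched without any Beam dependency. `NullPartitionCoding` below is a hypothetical stdlib-only stand-in, not Beam's actual ProducerRecordCoder or NullableCoder:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class NullPartitionCoding {

    // Strategy 1: sentinel value. Kafka partition numbers are non-negative,
    // so -1 is safe to reserve for "no partition assigned".
    static byte[] encodeWithSentinel(Integer partition) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(partition != null ? partition : -1);
        }
        return bytes.toByteArray();
    }

    static Integer decodeWithSentinel(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            int p = in.readInt();
            return p == -1 ? null : p;
        }
    }

    // Strategy 2: presence byte, in the spirit of Beam's NullableCoder: one
    // marker byte says whether a value follows, so the full int range stays
    // usable. This is what a field like timestamp needs, where no in-band
    // sentinel can be reserved safely.
    static byte[] encodeWithPresenceByte(Integer value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            if (value == null) {
                out.writeByte(0);
            } else {
                out.writeByte(1);
                out.writeInt(value);
            }
        }
        return bytes.toByteArray();
    }

    static Integer decodeWithPresenceByte(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return in.readByte() == 0 ? null : in.readInt();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decodeWithSentinel(encodeWithSentinel(null)));         // null
        System.out.println(decodeWithSentinel(encodeWithSentinel(3)));            // 3
        System.out.println(decodeWithPresenceByte(encodeWithPresenceByte(null))); // null
        System.out.println(decodeWithPresenceByte(encodeWithPresenceByte(-1)));   // -1
    }
}
```

The sentinel costs nothing extra on the wire but only works when a value can be reserved; the presence byte costs one byte per element but round-trips every possible value, including -1.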


p.apply(mkKafkaReadTransform(numElements, new ValueAsTimestampFn()).withoutMetadata())
.apply(ParDo.of(new KV2ProducerRecord(topic)))
.setCoder(ProducerRecordCoder.of(VarIntCoder.of(), VarLongCoder.of()))
rangadi (Contributor):

Do you need to set a coder here?

aromanenko-dev (Author):

I think so; otherwise the pipeline does not know which coder to use for ProducerRecords. Can we avoid this?
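One reason automatic coder inference struggles here is Java's type erasure: a coder registry keyed by runtime class cannot see the type arguments of a parameterized element type, so a coder built from the component coders must be set explicitly. A toy illustration (hypothetical `Coder` interface and registry, not Beam's CoderRegistry):

```java
import java.util.HashMap;
import java.util.Map;

public class CoderInference {

    interface Coder<T> { byte[] encode(T value); }

    // A registry keyed by raw class, mimicking automatic coder inference.
    static final Map<Class<?>, Coder<?>> REGISTRY = new HashMap<>();

    static <T> void register(Class<T> clazz, Coder<T> coder) {
        REGISTRY.put(clazz, coder);
    }

    static {
        register(Integer.class, v -> new byte[] { v.byteValue() });
    }

    @SuppressWarnings("unchecked")
    static <T> Coder<T> infer(Class<T> clazz) {
        return (Coder<T>) REGISTRY.get(clazz); // null when nothing is registered
    }

    // A parameterized container, standing in for ProducerRecord<K, V>.
    record Pair<K, V>(K key, V value) {}

    public static void main(String[] args) {
        // Inference works for simple registered types...
        System.out.println(infer(Integer.class) != null); // true
        // ...but after erasure, Pair<Integer, Long> and Pair<String, String>
        // share one raw class, so nothing useful can be looked up: the caller
        // must supply an explicit coder assembled from the component coders.
        System.out.println(infer(Pair.class) == null);    // true
    }
}
```

This is why the test pipeline above composes the coder by hand from `VarIntCoder` and `VarLongCoder` and passes it to `setCoder`.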

@aromanenko-dev (Author):

@rangadi Thank you for the review! I addressed your comments; please take a look.

@rangadi (Contributor) left a comment:

Left a minor comment about updating the TODO comment.
LGTM.

Thanks for the contribution. Please ping any committer once this passes all the tests.

@@ -58,7 +58,7 @@ public ProducerRecordCoder(Coder<K> keyCoder, Coder<V> valueCoder) {
   @Override
   public void encode(ProducerRecord<K, V> value, OutputStream outStream) throws IOException {
     stringCoder.encode(value.topic(), outStream);
-    intCoder.encode(value.partition() != null ? value.partition() : Integer.MAX_VALUE, outStream);
+    intCoder.encode(value.partition() != null ? value.partition() : -1, outStream);
rangadi (Contributor):

Your preference: either -1 or INT_MAX is fine with me.

aromanenko-dev (Author):

Let's keep "-1" for now.

@aromanenko-dev (Author):

@rangadi I addressed your last comments. I also added a ProducerRecordCoderTest class for testing ProducerRecordCoder, and updated the main KafkaIO Javadoc a bit to make users aware that KafkaIO.writeRecords() is now available.
If everything is finally OK, then I suppose we can merge it.

@rangadi (Contributor)

rangadi commented Nov 20, 2018

👍 Looks great to me.
Thanks for updating the main Javadoc; I completely forgot about it.
Committers, please merge (cc: @iemejia or @lukecwik).

@iemejia (Member) left a comment:

LGTM

@iemejia merged commit a89226e into apache:master on Nov 22, 2018