
[FLINK-24228][connectors/firehose] - Unified Async Sink for Kinesis Firehose #18314

Merged
merged 3 commits on Feb 1, 2022

Conversation

CrynetLogistics
Contributor

What is the purpose of the change

Allows users to write to Kinesis Data Firehose (KDF) directly through this sink, which is based on the unified sink (FLIP-141).
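For context, using the new sink from the DataStream API is expected to look roughly like the sketch below. The class and builder method names reflect my reading of the connector added in this PR (KinesisFirehoseSink and its builder); the region, delivery stream name, and input data are placeholders.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.firehose.sink.KinesisFirehoseSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FirehoseSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("one", "two", "three");

        // Client configuration is passed through to the underlying AWS SDK v2 Firehose client.
        Properties clientProperties = new Properties();
        clientProperties.setProperty("aws.region", "eu-west-1");

        KinesisFirehoseSink<String> sink =
                KinesisFirehoseSink.<String>builder()
                        .setFirehoseClientProperties(clientProperties)
                        .setSerializationSchema(new SimpleStringSchema())
                        .setDeliveryStreamName("my-delivery-stream")
                        .build();

        input.sinkTo(sink);
        env.execute("Firehose sink example");
    }
}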

Brief change log


  • Kinesis Data Firehose DataStream sink
  • Minor refactor of KDS sink components to reuse them in KDF sink

Verifying this change

Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing


This change added tests and can be verified as follows:

  • Added integration tests for end-to-end inserts into Firehose
  • Unit tests
  • A separate Jira/PR will follow for e2e tests hitting live Firehose

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): yes
  • The public API, i.e., is any changed class annotated with @Public(Evolving): yes
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? yes
  • If yes, how is the feature documented? docs & JavaDocs

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 0cb7e0c (Mon Jan 10 11:02:23 UTC 2022)

Warnings:

  • 4 pom.xml files were touched: Check for build and licensing issues.
  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Jan 10, 2022

CI report:

Bot commands
The @flinkbot bot supports the following commands:
  • @flinkbot run azure re-run the last Azure build

@CrynetLogistics CrynetLogistics force-pushed the FLINK-24228 branch 3 times, most recently from 848e61a to 1bd1a6c Compare January 10, 2022 17:34
@CrynetLogistics CrynetLogistics changed the title [connectors/firehose] FLINK-24228 - Unified Async Sink for Kinesis Firehose [FLINK-24228][connectors/firehose] - Unified Async Sink for Kinesis Firehose Jan 11, 2022
@CrynetLogistics
Contributor Author

Documentation for this work: https://issues.apache.org/jira/browse/FLINK-25692

@CrynetLogistics CrynetLogistics force-pushed the FLINK-24228 branch 2 times, most recently from 4161e82 to d782d9c Compare January 19, 2022 17:18
@CrynetLogistics
Contributor Author

@CrynetLogistics remember to change the default maxInFlightRequests to 50 and maxBufferedRequests to something sensible.
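For reference, these defaults are among the optional batching parameters exposed on the builder inherited from the async sink base. A hedged sketch of overriding them, reusing the imports from the earlier sketch (the setter names setMaxInFlightRequests/setMaxBufferedRequests are assumed from the AsyncSinkBaseBuilder, and the values are illustrative):

Properties clientProperties = new Properties();
clientProperties.setProperty("aws.region", "eu-west-1");

KinesisFirehoseSink<String> sink =
        KinesisFirehoseSink.<String>builder()
                .setFirehoseClientProperties(clientProperties)
                .setSerializationSchema(new SimpleStringSchema())
                .setDeliveryStreamName("my-delivery-stream")
                // Optional throughput / back-pressure knobs from the async sink base.
                // 50 mirrors the in-flight default mentioned above; the buffered-request
                // cap below is an illustrative value, not necessarily the final default.
                .setMaxInFlightRequests(50)
                .setMaxBufferedRequests(10_000)
                .build();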

@CrynetLogistics
Contributor Author

[images attached]

env.execute("Integration Test");

List<S3Object> objects = listBucketObjects(s3AsyncClient, BUCKET_NAME);
assertEquals(NUMBER_OF_ELEMENTS, objects.size());
Contributor

Please update this assertion to use AssertJ.
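For reference, the AssertJ equivalent of the assertion above would be roughly the following (identifiers such as listBucketObjects and NUMBER_OF_ELEMENTS come from the test shown above):

import static org.assertj.core.api.Assertions.assertThat;

List<S3Object> objects = listBucketObjects(s3AsyncClient, BUCKET_NAME);
assertThat(objects).hasSize(NUMBER_OF_ELEMENTS);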

Comment on lines 75 to 77
assertEquals(
testString.getBytes(StandardCharsets.US_ASCII).length,
sinkWriter.getSizeInBytes(record));
Contributor

Change to AssertJ, here and in the other instances.
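A hedged sketch of the same change for this assertion (the identifiers come from the quoted test code):

import static org.assertj.core.api.Assertions.assertThat;

assertThat(sinkWriter.getSizeInBytes(record))
        .isEqualTo(testString.getBytes(StandardCharsets.US_ASCII).length);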

env.fromSequence(1, 10_000_000L)
.map(Object::toString)
.returns(String.class)
.map(data -> mapper.writeValueAsString(ImmutableMap.of("data", data)));
Contributor

nit: Why not use a JSON serialization schema instead of mapping to a string?

Contributor Author

Sorry, I'm really struggling to get it working... I tried casting to RowData and using a JsonRowDataSerializationSchema but ran into issues. I could implement my own SerializationSchema, but I feel that might distract from the main point... any help would be much appreciated.

Is it that I need to have
env.fromSequence(...).<somehow cast to RowData>; and then pass in a JsonRowDataSerializationSchema to the sink?
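For what it's worth, if the custom SerializationSchema route were taken instead, a minimal sketch could look like the following (purely hypothetical and not part of this PR; the class name JsonWrappingSerializationSchema is made up):

import java.util.Collections;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.serialization.SerializationSchema;

/** Hypothetical sketch: serializes each String element as the JSON document {"data": "<element>"}. */
public class JsonWrappingSerializationSchema implements SerializationSchema<String> {

    private transient ObjectMapper mapper;

    @Override
    public void open(InitializationContext context) {
        // Create the mapper lazily when the schema is opened on the task manager.
        mapper = new ObjectMapper();
    }

    @Override
    public byte[] serialize(String element) {
        try {
            return mapper.writeValueAsBytes(Collections.singletonMap("data", element));
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Could not serialize element to JSON", e);
        }
    }
}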

Contributor

As discussed, please remove this. We should not be maintaining sample code in the test package of a module. Let's raise a follow-up to remove the KDS sample too.

@CrynetLogistics CrynetLogistics force-pushed the FLINK-24228 branch 8 times, most recently from 1f858a4 to 909ba3b Compare January 26, 2022 18:46
Comment on lines 58 to 60
private S3AsyncClient s3AsyncClient;
private FirehoseAsyncClient firehoseAsyncClient;
private IamAsyncClient iamAsyncClient;
Contributor

Move these below the static fields

…based on the Async Sink Base implemented:

 - Refactored AWSUnifiedSinksUtil into a class that caters for Kinesis and Firehose
 - Extracted commonalities between KDS & KDF sinks into flink-connector-aws-base
 - Implemented integration test based on Localstack container
 - Changed host/container ports to be different, made HTTP 1.1 the default, fixed the Localstack issue
 - Added docs page, changed type in Firehose, turned logging off, removed unused dependencies.
…lization schema rather than an ElementConverter, thereby encapsulating the Firehose `Record` from the user, verifying stream objects in KinesisFirehoseITCase
@dannycranmer
Contributor

Thanks @CrynetLogistics, LGTM, merging
