
[FLINK-12017][table-runtime-blink] Introduce Rank and Deduplicate operators for blink streaming runtime #8109

Closed
Wants to merge 5 commits

Conversation

@beyond1920 (Contributor) commented Apr 3, 2019

What is the purpose of the change

Introduce Rank and Deduplicate operators for blink streaming runtime

Brief change log

  • StreamExecExchange, StreamExecRank, StreamExecDeduplicate implements StreamExecNode
  • Introduce RankFunctions and DeduplicateFunctions.
  • Fix some minor bug

Verifying this change

 IT Case

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes, for test)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (yes)
  • If yes, how is the feature documented? (JavaDocs)

@flinkbot (Collaborator) commented Apr 3, 2019

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The bot tracks the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@KurtYoung (Contributor) left a comment:

The indentation configuration of your IDE doesn't seem to be set up correctly; there are a lot of indentation issues.
I still need some more time to go through all the rank-function-related classes.


val rowTypeInfo = inputTransform.getOutputType.asInstanceOf[BaseRowTypeInfo]

val generateRetraction = true
Contributor

I think keep first row will not generate retraction?

Contributor Author

Whether to generate retraction could be inferred from the retraction rules, which is done in FLINK-12098. I could add a TODO message here.

Contributor Author

Updated here.

@KurtYoung (Contributor)

BTW, I think we need some dedicated tests for all the rank functions.

@beyond1920 beyond1920 force-pushed the flink-12017 branch 3 times, most recently from 615ae4b to 8e25cfd Compare April 10, 2019 03:03
protected long hitCount = 0L;
protected long requestCount = 0L;

AbstractRankFunction(long minRetentionTime, long maxRetentionTime, BaseRowTypeInfo inputRowType,
Contributor

Use inputArity and outputArity instead of inputRowType and outputRowType.

Contributor Author

inputRowType is still useful in the other case; outputRowType could be removed.

private transient Map<BaseRow, TopNBuffer> kvSortedMap;

public AppendRankFunction(
long minRetentionTime, long maxRetentionTime, BaseRowTypeInfo inputRowType, BaseRowTypeInfo outputRowType,
Contributor

I think we can use BaseRowSerializer instead of inputRowType

Contributor Author

inputRowType still needs to be used when creating the StateDescriptor. For example:

ListTypeInfo<BaseRow> valueTypeInfo = new ListTypeInfo<>(inputRowType);
MapStateDescriptor<BaseRow, List<BaseRow>> mapStateDescriptor =
    new MapStateDescriptor<>("data-state-with-append", sortKeyType, valueTypeInfo);
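For context, a self-contained sketch of that descriptor setup follows. It is an illustration only, not the PR's exact code: the class name AppendRankStateSketch is hypothetical, and the blink-runtime import paths and parameter types for BaseRow and BaseRowTypeInfo are assumed from the snippet above.

import java.util.List;

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.java.typeutils.ListTypeInfo;
import org.apache.flink.table.dataformat.BaseRow;
import org.apache.flink.table.typeutils.BaseRowTypeInfo;

class AppendRankStateSketch {

    // Buffers input rows per sort key: sort key -> list of rows sharing that key.
    // The input row type information is needed to build the value type, which is why
    // inputRowType cannot simply be replaced by a serializer here.
    static MapStateDescriptor<BaseRow, List<BaseRow>> dataStateDescriptor(
            BaseRowTypeInfo sortKeyType, BaseRowTypeInfo inputRowType) {
        ListTypeInfo<BaseRow> valueTypeInfo = new ListTypeInfo<>(inputRowType);
        return new MapStateDescriptor<>("data-state-with-append", sortKeyType, valueTypeInfo);
    }
}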

@beyond1920 beyond1920 force-pushed the flink-12017 branch 4 times, most recently from 4f9f70c to f88533d Compare April 12, 2019 09:33
with StreamPhysicalRel
with StreamExecNode[BaseRow] {

private val DEFAULT_MAX_PARALLELISM = 1 << 7
Contributor

too small?

Contributor Author

The variable's name is confusing: it is not the max parallelism of the operators but the max number of key-groups. Maybe it's better to use StreamGraphGenerator.DEFAULT_LOWER_BOUND_MAX_PARALLELISM or KeyGroupRangeAssignment.DEFAULT_LOWER_BOUND_MAX_PARALLELISM.

Member

+1. I'm in favor of the latter, which is also used in KeyedStream.java#134 and has a clear Javadoc description.
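As a side note, a minimal sketch of the suggested replacement (the class name RankDefaultsSketch is hypothetical; the Flink constant itself exists as described and carries the same value as 1 << 7, but with a self-describing name and Javadoc):

import org.apache.flink.runtime.state.KeyGroupRangeAssignment;

class RankDefaultsSketch {

    // Lower bound for the number of key-groups (the default max parallelism),
    // taken from Flink instead of a locally defined DEFAULT_MAX_PARALLELISM = 1 << 7.
    static final int DEFAULT_LOWER_BOUND_MAX_PARALLELISM =
            KeyGroupRangeAssignment.DEFAULT_LOWER_BOUND_MAX_PARALLELISM;
}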

super.open(configure);
String stateName = keepLastRow ? "DeduplicateFunctionCleanupTime" : "DeduplicateFunctionCleanupTime";
initCleanupTimeState(stateName);
ValueStateDescriptor rowStateDesc = new ValueStateDescriptor("rowState", rowTypeInfo);
Contributor

In the firstRow case, only the PK is needed in state; we don't have to store the whole row.

@wuchong (Member) commented Apr 12, 2019

Yes, I think we should only store the PK in state here.

If we only store the PK, the ideal state schema would be ValueState<Boolean>, but then it can't share the same state with the lastRow mode. Maybe we need to separate the implementations for firstRow and lastRow.

Contributor Author

Exactly. Only when keepLastRow is true and retraction is generated do we need to store the complete row; otherwise storing the PK is enough. Thanks.
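To make the outcome of this thread concrete, here is a minimal sketch of the two state layouts; the descriptor names and the class DeduplicateStateSketch are hypothetical, and the blink-runtime import paths are assumed. Keep-first-row only has to remember that a key was seen, while keep-last-row with retraction enabled must keep the full previous row so the retract message can be emitted.

import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.dataformat.BaseRow;
import org.apache.flink.table.typeutils.BaseRowTypeInfo;

class DeduplicateStateSketch {

    // Keep-first-row: a per-key marker is enough; the row itself is not stored.
    static ValueStateDescriptor<Boolean> firstRowSeenState() {
        return new ValueStateDescriptor<>("first-row-seen", Types.BOOLEAN);
    }

    // Keep-last-row with retraction: the previous row must be kept so it can be
    // retracted before the updated row is emitted.
    static ValueStateDescriptor<BaseRow> lastRowState(BaseRowTypeInfo rowTypeInfo) {
        return new ValueStateDescriptor<>("last-row", rowTypeInfo);
    }
}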

@beyond1920 beyond1920 force-pushed the flink-12017 branch 2 times, most recently from 2c6eb76 to b049244 Compare April 13, 2019 00:49
@wuchong (Member) left a comment:

Thanks @beyond1920 for the great work!

I think this PR is close to ready. Apart from DeduplicateFunction, which needs a bit more polishing, I only have some other minor comments.

Cheers,
Jark

@beyond1920 (Contributor, Author) left a comment:

@wuchong thanks for your suggestions. I split keepFirstRow and keepLastRow into different functions and updated the API of BaseRowKeySelector.

@beyond1920 beyond1920 force-pushed the flink-12017 branch 5 times, most recently from bce8cf6 to 8cd47d6 Compare April 15, 2019 03:50
@wuchong (Member) commented Apr 15, 2019

@flinkbot approve all.

+1 to merge.

Wait until Travis turns green.

@beyond1920 beyond1920 force-pushed the flink-12017 branch 3 times, most recently from 1d79a64 to 288fa78 Compare April 15, 2019 07:27
  • 2. Introduce SortedMapSerializerSnapshot to do snapshot for SortedMapTypeInfo.
  • … DeduplicateKeepLastRowFunction
  • 2. other minor update.
@wuchong (Member) commented Apr 16, 2019

Merging...

@beyond1920 beyond1920 changed the title [FLINK-12017][table-planner-blink] Support translation from Rank/Deduplicate to StreamTransformation [FLINK-12017][table-runtime-blink] Introduce Rank and Deduplicate operators for blink streaming runtime Apr 16, 2019
@asfgit asfgit closed this in 8241954 Apr 16, 2019
HuangZhenQiu pushed a commit to HuangZhenQiu/flink that referenced this pull request Apr 22, 2019
tianchen92 pushed a commit to tianchen92/flink that referenced this pull request May 13, 2019