
Conversation

@godfreyhe
Contributor

What is the purpose of the change

Separate the implementations of the batch group aggregate nodes: BatchExecHashAggregate, BatchExecLocalHashAggregate, BatchExecSortAggregate, BatchExecLocalSortAggregate, and BatchExecPythonGroupAggregate

Brief change log

  • Rename BatchExecGroupAggregateBase to BatchPhysicalGroupAggregateBase and do some refactoring
  • Introduce BatchPhysicalHashAggregate, and make BatchExecHashAggregate extend only ExecNode
  • Introduce BatchPhysicalLocalHashAggregate, and make it extend only FlinkPhysicalRel
  • Introduce BatchPhysicalSortAggregate, and make BatchExecSortAggregate extend only ExecNode
  • Introduce BatchPhysicalLocalSortAggregate, and make it extend only FlinkPhysicalRel
  • Introduce BatchPhysicalPythonGroupAggregate, and make BatchExecPythonGroupAggregate extend only ExecNode
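
To make the split concrete, here is a minimal sketch of the resulting two-class pattern. The interfaces and method signatures below are simplified stand-ins for illustration, not the actual Flink planner APIs:

// Simplified illustration only; names and signatures are stand-ins,
// not the real Flink planner types.

// Planning-side node: participates in optimization and knows how to
// hand off to the execution layer.
interface FlinkPhysicalRel {
    ExecNode<?> translateToExecNode();
}

// Execution-side node: independent of the planning layer.
abstract class ExecNode<T> {
    abstract T translateToPlan();
}

// After this PR, each aggregate is split into two classes:
final class BatchPhysicalHashAggregate implements FlinkPhysicalRel {
    @Override
    public ExecNode<?> translateToExecNode() {
        return new BatchExecHashAggregate();
    }
}

final class BatchExecHashAggregate extends ExecNode<String> {
    @Override
    String translateToPlan() {
        // The real implementation generates the runtime operator here.
        return "hash-aggregate transformation";
    }
}

With this shape, the optimizer works only on FlinkPhysicalRel nodes and converts to ExecNode at the end of planning, so the execution layer no longer needs to depend on the Calcite-based planning classes.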

Verifying this change

This change is a refactoring covered by existing tests.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@flinkbot
Collaborator

flinkbot commented Jan 5, 2021

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit b882bd3 (Fri May 28 07:02:00 UTC 2021)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.

Details

The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Jan 5, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@godfreyhe force-pushed the FLINK-20738 branch 2 times, most recently from 662db1d to f08790b on January 6, 2021 at 05:52
Contributor

@wenlong88 left a comment


The overall change LGTM, just left some minor comments.


/** Returns the size in bytes for the given option in {@link TableConfig}. */
public static long getMemorySize(TableConfig tableConfig, ConfigOption<String> option) {
    return MemorySize.parse(tableConfig.getConfiguration().getString(option)).getBytes();
}
Contributor

ConfigOption supports the MemorySize type; can we just use it? Ref: FRAMEWORK_HEAP_MEMORY

Contributor Author

@godfreyhe commented Jan 6, 2021

It would break the current API: a compile error would occur for existing user code such as configuration.set(ExecutionConfigOptions.TABLE_EXEC_RESOURCE_HASH_AGG_MEMORY, "128 mb")

Contributor Author

I created an issue for this improvement; see https://issues.apache.org/jira/browse/FLINK-20879
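
For context, here is a hedged sketch contrasting the two option styles discussed above. It assumes the flink-core Configuration, ConfigOptions, and MemorySize APIs; treat the exact builder calls and the key string as approximate rather than authoritative:

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.MemorySize;

public class MemoryOptionStyles {

    // Current style: the option is a String and must be parsed on every read.
    static final ConfigOption<String> HASH_AGG_MEMORY_AS_STRING =
            ConfigOptions.key("table.exec.resource.hash-agg.memory")
                    .stringType()
                    .defaultValue("128 mb");

    // Proposed style (FLINK-20879): a typed MemorySize option.
    static final ConfigOption<MemorySize> HASH_AGG_MEMORY_TYPED =
            ConfigOptions.key("table.exec.resource.hash-agg.memory")
                    .memoryType()
                    .defaultValue(MemorySize.parse("128 mb"));

    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Compiles against the String option; against the typed option this
        // call would be a compile error, which is the concern raised above.
        conf.set(HASH_AGG_MEMORY_AS_STRING, "128 mb");
        long bytes = MemorySize.parse(conf.get(HASH_AGG_MEMORY_AS_STRING)).getBytes();

        // The typed option returns MemorySize directly; no parsing needed.
        conf.set(HASH_AGG_MEMORY_TYPED, MemorySize.parse("64 mb"));
        long typedBytes = conf.get(HASH_AGG_MEMORY_TYPED).getBytes();

        System.out.println(bytes + " vs " + typedBytes);
    }
}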

// if group by an update field or group by a field mono is null, just return null
if (inputMonotonicity == null ||
    grouping.exists(e => inputMonotonicity.fieldMonotonicities(e) != CONSTANT)) {
Contributor

the old format is better
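
To make the intent of this guard concrete, here is a small standalone illustration (in Java, the language of the other snippet in this review); the Monotonicity enum and deriveGroupAggMonotonicity helper are hypothetical simplifications, not the planner's actual monotonicity API:

import java.util.List;

public class MonotonicityGuard {

    // Hypothetical simplification of the planner's monotonicity values.
    enum Monotonicity { CONSTANT, INCREASING, DECREASING, NOT_MONOTONIC }

    // Mirrors the Scala guard above: if the input monotonicity is unknown,
    // or any grouping field is not CONSTANT (i.e. may be updated), the
    // aggregate's monotonicity cannot be derived, so return null.
    static List<Monotonicity> deriveGroupAggMonotonicity(
            List<Monotonicity> inputMonotonicity, int[] grouping) {
        if (inputMonotonicity == null) {
            return null;
        }
        for (int field : grouping) {
            if (inputMonotonicity.get(field) != Monotonicity.CONSTANT) {
                return null;
            }
        }
        // ... in the real code, per-aggregate monotonicity is derived here.
        return inputMonotonicity;
    }

    public static void main(String[] args) {
        List<Monotonicity> mono =
                List.of(Monotonicity.CONSTANT, Monotonicity.INCREASING);
        // Grouping on field 1 (INCREASING) -> null: not derivable.
        System.out.println(deriveGroupAggMonotonicity(mono, new int[] {1}));
        // Grouping on field 0 (CONSTANT) -> monotonicity is propagated.
        System.out.println(deriveGroupAggMonotonicity(mono, new int[] {0}));
    }
}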

// if group by a update field or group by a field mono is null, just return null
if (inputMonotonicity == null ||
    grouping.exists(e => inputMonotonicity.fieldMonotonicities(e) != CONSTANT)) {
Contributor

ditto

val currentMono = fieldMonotonicities(index)
if (childMono != currentMono &&
    !aggCall.getAggregation.isInstanceOf[SqlCountAggFunction]) {
Contributor

ditto

… to BatchPhysicalGroupAggregateBase and do some refactoring
…te, and make BatchExecHashAggregate only extended from ExecNode
…gregate, and make BatchPhysicalLocalHashAggregate only extended from FlinkPhysicalRel
…te, and make BatchExecSortAggregate only extended from ExecNode
…gregate, and make BatchPhysicalLocalSortAggregate only extended from FlinkPhysicalRel
…Aggregate, and make BatchExecPythonGroupAggregate only extended from ExecNode
Contributor

@wenlong88 left a comment

LGTM

godfreyhe added a commit to godfreyhe/flink that referenced this pull request Jan 7, 2021
…te, and make BatchExecHashAggregate only extended from ExecNode

This closes apache#14562
godfreyhe added a commit to godfreyhe/flink that referenced this pull request Jan 7, 2021
…gregate, and make BatchPhysicalLocalHashAggregate only extended from FlinkPhysicalRel

This closes apache#14562
godfreyhe added a commit to godfreyhe/flink that referenced this pull request Jan 7, 2021
…te, and make BatchExecSortAggregate only extended from ExecNode

This closes apache#14562
godfreyhe added a commit to godfreyhe/flink that referenced this pull request Jan 7, 2021
…gregate, and make BatchPhysicalLocalSortAggregate only extended from FlinkPhysicalRel

This closes apache#14562
godfreyhe added a commit to godfreyhe/flink that referenced this pull request Jan 7, 2021
…Aggregate, and make BatchExecPythonGroupAggregate only extended from ExecNode

This closes apache#14562
@godfreyhe closed this in 146c68d Jan 7, 2021
@godfreyhe deleted the FLINK-20738 branch January 7, 2021 14:09