
[FLINK-18988][table] Continuous query with LATERAL and LIMIT produces… #13291

Closed
wants to merge 4 commits

Conversation

danny0405 (Contributor)

… wrong result

The batch mode rank only supports RANK function, so we only rewrite the
stream mode query.

What is the purpose of the change

Fix queries of the following pattern:

SELECT state, name
FROM
  (SELECT DISTINCT state FROM cities) states,
  LATERAL (
    SELECT name, pop
    FROM cities
    WHERE state = states.state
    ORDER BY pop DESC
    LIMIT 3
  )

Before this patch, the query generated a wrong plan and therefore wrong results.
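
For intuition, the rewrite targets the standard per-group Top-N pattern; a roughly equivalent formulation of the query above (an illustrative sketch, not taken from the PR) is:

SELECT state, name
FROM (
  SELECT state, name, pop,
    ROW_NUMBER() OVER (PARTITION BY state ORDER BY pop DESC) AS rownum
  FROM cities
)
WHERE rownum <= 3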

Brief change log

  • Add a new rule CorrelateSortToRankRule for the rewrite
  • Add plan test and IT test

Verifying this change

Added tests.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no
  • If yes, how is the feature documented? not documented

@flinkbot (Collaborator)

flinkbot commented Sep 1, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 8406cd5 (Tue Sep 01 03:33:51 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot (Collaborator)

flinkbot commented Sep 1, 2020

CI report:

Bot commands
The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

*
* <p>This rule can only be used in HepPlanner.
*/
class CorrelateSortToRankRule extends RelOptRule(
Contributor

side comment: Our long-term goal is to get rid of Scala. This class could have been implemented easily in Java. Please keep that in mind for future contributions.

danny0405 (Contributor Author)

Thanks for the reminder. I saw that most of the rules were implemented in Scala when I was contributing; do you mean we should prefer Java rules in the future?

Contributor

See FLIP-32 Appendix: Porting Guidelines.

A new planner rule or node that only depends on Calcite and runtime classes should be implemented in Java.

danny0405 (Contributor Author)

I see, thanks for sharing ~

Contributor

We rework so many classes all the time; eventually the Scala code will hopefully be gone.

danny0405 (Contributor Author)

Sure, it would be exciting if all the code could be switched to Java.

@twalthr (Contributor) left a comment

I'm trying to base an example on this PR. But the results differ between batch and streaming mode. It seems that the batch mode now outputs the global maximum.

@danny0405 (Contributor Author)

I'm trying to base an example on this PR. But the results differ between batch and streaming mode. It seems that the batch mode now outputs the global maximum.

The patch is only for the streaming case; let me check whether batch mode is supported.

… wrong result

The batch mode rank only supports RANK function, so we only rewrite the
stream mode query.
@godfreyhe (Contributor) left a comment

Thanks for the fix @danny0405, I left some comments.
Btw, I found that the blink batch planner does not support the given query, and I get errors in the sql-client like "org.apache.flink.table.api.TableException: unexpected correlate variable $cor1 in the plan".

Comment on lines 95 to 102
// rewrite before decorrelation
chainedProgram.addLast(
  PRE_DECORRELATE_REWRITE,
  FlinkHepRuleSetProgramBuilder.newBuilder
    .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE)
    .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
    .add(FlinkStreamRuleSets.PRE_DECORRELATION_RULES)
    .build())
Contributor

We should merge this program into DECORRELATE

Comment on lines +41 to +61
* {{{
* LogicalProject(state=[$0], name=[$1])
* +- LogicalCorrelate(correlation=[$cor0], joinType=[inner], requiredColumns=[{0}])
* :- LogicalAggregate(group=[{0}])
* : +- LogicalProject(state=[$1])
* : +- LogicalTableScan(table=[[default_catalog, default_database, cities]])
* +- LogicalSort(sort0=[$1], dir0=[DESC-nulls-last], fetch=[3])
* +- LogicalProject(name=[$0], pop=[$2])
* +- LogicalFilter(condition=[=($1, $cor0.state)])
* +- LogicalTableScan(table=[[default_catalog, default_database, cities]])
* }}}
*
* <p>would be transformed to
*
* {{{
* LogicalProject(state=[$0], name=[$1])
* +- LogicalProject(state=[$1], name=[$0], pop=[$2])
* +- LogicalRank(rankType=[ROW_NUMBER], rankRange=[rankStart=1, rankEnd=3],
* partitionBy=[$1], orderBy=[$2 DESC], select=[name=$0, state=$1, pop=$2])
* +- LogicalTableScan(table=[[default_catalog, default_database, cities]])
* }}}
Contributor

It seems the rewrite is not correct; consider the following example:

SELECT state, name 
FROM
  (SELECT DISTINCT state FROM cities1) states,
  LATERAL (
    SELECT name, pop
    FROM cities2
    WHERE state = states.state
    ORDER BY pop DESC
    LIMIT 3
  );

The outer table (cities1) and the inner table (cities2) are different.

danny0405 (Contributor Author)

Yes, that is why I added the match condition aggInput.getInput.getDigest.equals(filter.getInput.getDigest). If the outer and inner tables are different, the rule does not match.
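
A minimal sketch of that guard (illustrative only; the node types are assumptions based on the plan shown in the rule documentation above):

import org.apache.calcite.rel.core.{Filter, Project}

// Hedged sketch of the digest check described above. `aggInput` is the Project feeding the
// DISTINCT aggregate, `filter` is the correlated filter of the LATERAL branch. The rewrite
// is only valid when both sides read the same relation, which is approximated by comparing
// the digests of their inputs (the two table scans).
def sameSourceTable(aggInput: Project, filter: Filter): Boolean =
  aggInput.getInput.getDigest.equals(filter.getInput.getDigest)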

Contributor

Thanks for the explanation

@danny0405 (Contributor Author)

danny0405 commented Nov 17, 2020

Thanks for the fix @danny0405, I left some comments.
Btw, I found that the blink batch planner does not support the given query, and I get errors in the sql-client like "org.apache.flink.table.api.TableException: unexpected correlate variable $cor1 in the plan".

Yes, because our batch rank only supports the RANK rank type (see the BatchExecRankRule match condition), while here we use ROW_NUMBER.
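
For illustration, a hedged example of the RANK-based Top-N form that the batch rank rule is said to match, in contrast to the ROW_NUMBER-based plan this rewrite produces:

SELECT state, name
FROM (
  SELECT state, name, pop,
    RANK() OVER (PARTITION BY state ORDER BY pop DESC) AS rk
  FROM cities
)
WHERE rk <= 3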

@godfreyhe (Contributor) left a comment

We should add a CorrelateSortToRankRuleTest which only involves the minimal rule set to verify the logic of CorrelateSortToRankRule, including both supported and unsupported cases.


val agg: Aggregate = call.rel(1)
if (agg.getAggCallList.size() > 0
  || agg.getGroupSets.size() > 1
  || agg.getGroupSet.cardinality() != 1) {
Contributor

agg.getGroupSet.cardinality() != 1

We should support multiple equality conditions, such as: state = states.state AND name = states.name

danny0405 (Contributor Author)

Yes, but the code would become much more complicated; we can support that in the future. In general, one equality condition is enough: the outer query projects the distinct values in order to avoid an unnecessary Cartesian product.
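
For concreteness, the multi-key variant referred to above would look roughly like the following (an illustrative sketch; not matched by this rule as of this PR):

SELECT state, name
FROM
  (SELECT DISTINCT state, name FROM cities) states,
  LATERAL (
    SELECT name, pop
    FROM cities
    WHERE state = states.state AND name = states.name
    ORDER BY pop DESC
    LIMIT 3
  )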

Comment on lines 141 to 142
val oriBuilder = call.builder()
val builder = FlinkRelBuilder.of(oriBuilder.getCluster, oriBuilder.getRelOptSchema)
Contributor

nit: one simple way is to use FlinkRelFactories.FLINK_REL_BUILDER when constructing the RelOptRule, and then we can cast call.builder() to FlinkRelBuilder; one example is FlinkSubQueryRemoveRule.
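
A rough sketch of that suggestion (names are taken from the comment above; the operand layout, packages, and the actual rule structure in the PR may differ):

import org.apache.calcite.plan.RelOptRule.{any, operand}
import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
import org.apache.calcite.rel.core.Correlate
// The Flink classes below are named in the review comment; package locations are assumptions.
import org.apache.flink.table.planner.calcite.{FlinkRelBuilder, FlinkRelFactories}

class CorrelateSortToRankRule extends RelOptRule(
    operand(classOf[Correlate], any()),
    FlinkRelFactories.FLINK_REL_BUILDER, // install Flink's builder factory for this rule
    "CorrelateSortToRankRule") {

  override def onMatch(call: RelOptRuleCall): Unit = {
    // With the Flink factory installed, call.builder() can be cast directly,
    // so no separate FlinkRelBuilder.of(cluster, schema) construction is needed.
    val builder = call.builder().asInstanceOf[FlinkRelBuilder]
    // ... build the replacement Rank with `builder` ...
  }
}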

RankType.ROW_NUMBER,
new ConstantRankRange(
  1,
  sort.fetch.asInstanceOf[RexLiteral].getValueAs(classOf[java.lang.Long])),
Contributor

We can use SortUtil.getLimitEnd here.
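
A hedged sketch of that suggestion; the exact signature of SortUtil.getLimitEnd is an assumption here (taking the sort's offset and fetch expressions and returning the rank end):

// Assumption: SortUtil.getLimitEnd(offset, fetch) yields the limit end of the Sort node,
// replacing the manual RexLiteral extraction quoted above.
val rankRange = new ConstantRankRange(1, SortUtil.getLimitEnd(sort.offset, sort.fetch))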

val INSTANCE = new CorrelateSortToRankRule
}


Contributor

nit: redundant line

@godfreyhe (Contributor) left a comment

Thanks for the update, LGTM overall; there is only one minor comment left. @twalthr do you have any other concerns?

val newCollation = RelCollations.of(newFieldCollations)

val newRel = builder
  .push(filter.getInput()).asInstanceOf[FlinkRelBuilder]
Contributor

cast is redundant

danny0405 (Contributor Author)

Updated, thanks ~

@danny0405 (Contributor Author)

Thanks @godfreyhe, I have addressed the review comments; can you take another look ~ Thanks so much in advance ~

@godfreyhe (Contributor) left a comment

LGTM

@twalthr (Contributor)

twalthr commented Nov 24, 2020

Thanks @danny0405 and @godfreyhe. I will merge this now...

twalthr pushed a commit to twalthr/flink that referenced this pull request Nov 24, 2020
… wrong result

This closes apache#13291.
The batch mode rank only supports RANK function, so we only rewrite the
stream mode query.
twalthr closed this in a3320d1 Nov 24, 2020
5 participants