
Conversation

@wuchong
Member

@wuchong wuchong commented Jun 14, 2019

What is the purpose of the change

Primary keys and unique keys are standard meta information in SQL, and they are important for optimization, for example AggregateRemove, AggregateReduceGrouping, and state layout optimization for TopN and Join.

So in this PR, we extend TableSchema to carry primary key and unique key information, so that a TableSource can declare this meta information.

The primary key and unique key information currently only takes effect in the Blink planner; the Flink planner simply ignores it.

Brief change log

This pull request contains 2 commits:

  1. Add primary key and unique key to TableSchema with some Javadocs.
  2. Use the primary key and unique key information in Blink planner.
    • We refactored TableSourceTable so that it builds the FlinkStatistic from the TableSource instead of receiving it from outside, along with some test changes.

Verifying this change

  1. Adds a TableSchemaTest to check the schema builder methods.
  2. Adds an FlinkRelMdUniqueKeysTest#testGetUniqueKeysOnTableSourceScan test to check that the optimizer can get the unique key information from the TableSchema of the TableSource.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (yes)
  • If yes, how is the feature documented? (not documented)

@wuchong
Member Author

wuchong commented Jun 14, 2019

@godfreyhe , do you have time to have a look?

@flinkbot
Collaborator

flinkbot commented Jun 14, 2019

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit dda9e2c (Wed Dec 04 15:21:18 UTC 2019)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@wuchong wuchong changed the title from "Table schema" to "[FLINK-12846][table] Carry primary key and unique key information in TableSchema" on Jun 14, 2019
Arrays.equals(fieldDataTypes, schema.fieldDataTypes);
Arrays.equals(fieldDataTypes, schema.fieldDataTypes) &&
Arrays.equals(primaryKey, schema.primaryKey) &&
Arrays.equals(uniqueKeys, schema.uniqueKeys);
Contributor

Arrays.equals doesn't "work" for two-dimensional arrays. Use Arrays.deepEquals instead.

Member Author

Good catch! I also changed the hashCode implementation to Arrays.deepHashCode.
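The pitfall can be shown with a standalone example (illustrative only, not the PR's code): for a two-dimensional array such as uniqueKeys (String[][]), Arrays.equals compares the inner arrays by reference, while Arrays.deepEquals recurses into them.

```java
import java.util.Arrays;

// Demonstrates why Arrays.equals is wrong for two-dimensional arrays:
// it compares the inner arrays by identity, not by content.
class DeepEqualsDemo {
    public static void main(String[] args) {
        String[][] a = {{"id"}, {"name", "age"}};
        String[][] b = {{"id"}, {"name", "age"}};

        System.out.println(Arrays.equals(a, b));     // false: inner arrays differ by reference
        System.out.println(Arrays.deepEquals(a, b)); // true: recursive, element-wise comparison

        // The same reasoning applies to hashing, hence Arrays.deepHashCode.
        System.out.println(Arrays.deepHashCode(a) == Arrays.deepHashCode(b)); // true
    }
}
```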

.field("b", DataTypes.STRING())
.field("c", DataTypes.BIGINT());

String expected = "root\n |-- a: INT\n |-- b: STRING\n |-- c: BIGINT\n";
Contributor

The field type should come with nullability information. It seems we should modify DataType.toString().

Member Author

The default is nullable, and nullability is not printed in toString. If a type is not nullable, its toString will be INT NOT NULL.
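This convention can be sketched with a simplified model (the SketchType class is hypothetical, not Flink's actual LogicalType implementation): the default, nullable, is omitted from the printed form, and only NOT NULL is printed explicitly.

```java
// A simplified model of the toString behavior described above
// (not Flink's actual implementation): nullable is the default
// and is omitted; NOT NULL is printed explicitly.
class SketchType {
    private final String name;
    private final boolean nullable;

    SketchType(String name, boolean nullable) {
        this.name = name;
        this.nullable = nullable;
    }

    @Override
    public String toString() {
        return nullable ? name : name + " NOT NULL";
    }
}
```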

/**
* Tests for {@link TableSchema}.
*/
public class TableSchemaTest {
Contributor

Add an equals test (especially for uniqueKeys).

Member Author

Sure

Contributor

@docete docete left a comment

+1 LGTM; I left some comments.

@wuchong
Member Author

wuchong commented Jun 14, 2019

Thanks @docete , I have addressed the comments.

Contributor

@godfreyhe godfreyhe left a comment

@wuchong thanks for this PR, I left some comments.

* <p>A table schema that represents a table's structure with field names and data types and some
* constraint information (e.g. primary key, unique key).</p><br/>
*
* <p>Concepts about primary key and unique key:</p>
Contributor

Do we distinguish primary key from unique key? In the current Javadoc, there is no difference between them.

Member Author

The difference between a primary key and a unique key is that a table can have at most one primary key but more than one unique key, and the primary key doesn't need to be declared in the unique key list again.

I will add this to the class Javadoc.
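These semantics can be sketched in plain Java (a hypothetical ConstraintSketch class, not Flink's actual TableSchema builder): at most one primary key, any number of unique keys.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A simplified sketch of the constraint semantics described above
// (not Flink's actual API): a table has at most one primary key
// but may declare several unique keys.
class ConstraintSketch {
    private List<String> primaryKey;                      // at most one
    private final List<List<String>> uniqueKeys = new ArrayList<>();

    ConstraintSketch primaryKey(String... fields) {
        if (primaryKey != null) {
            throw new IllegalStateException("A primary key was already defined.");
        }
        primaryKey = Arrays.asList(fields);
        return this;
    }

    ConstraintSketch uniqueKey(String... fields) {
        uniqueKeys.add(Arrays.asList(fields));            // may be called repeatedly
        return this;
    }
}
```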

throw new IllegalArgumentException("The field '" + field +
"' does not exist in the schema.");
}
}
Contributor

That means we must build the field names first?

Member Author

Yes. I think so.

if (uniqueKeys == null) {
uniqueKeys = new ArrayList<>();
}
uniqueKeys.add(Arrays.asList(fields));
Contributor

Should we deduplicate the unique keys?

Member Author

Sure, I will add that.
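One possible way to keep the declared unique keys distinct (a sketch, not the PR's actual code): collect them into a LinkedHashSet of field lists, which drops duplicates while preserving declaration order.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Deduplicating unique keys with a LinkedHashSet: List.equals/hashCode are
// element-wise, so identical field lists collapse into one entry.
class DistinctUniqueKeys {
    public static void main(String[] args) {
        Set<List<String>> uniqueKeys = new LinkedHashSet<>();
        uniqueKeys.add(Arrays.asList("user_id"));
        uniqueKeys.add(Arrays.asList("name", "email"));
        uniqueKeys.add(Arrays.asList("user_id")); // duplicate, silently dropped

        System.out.println(uniqueKeys.size());    // prints 2
    }
}
```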

class TableSourceTable[T](
val tableSource: TableSource[T],
val isStreaming: Boolean,
val statistic: FlinkStatistic)
Contributor

If the statistic is deleted, how do we store the statistic from the catalog?

Contributor

@godfreyhe godfreyhe Jun 17, 2019

Another scenario: rules (like PushProjectIntoTableSourceScanRule) do not change statistics, so the new TableSource created by the rule could reuse the original statistic and avoid calling the TableSource#getTableStats method, which is costly.
So the def copy(statistic: FlinkStatistic): FlinkTable method defined in FlinkTable should not be deleted either.

Member Author

  1. The statistic from the catalog should be restored via TableSource#getTableStats().
  2. I think if the TableSource is not changed, then we don't need to construct a new TableSourceTable. We can reuse the original TableSourceTable and avoid calling TableSource#getTableStats again.

Contributor

  1. TableSource should be decoupled from the Catalog, or a method like setTableStats should be added to the TableSource interface.
  2. The TableSource may be changed, e.g. by projection push-down into the table source.

Member Author

  1. Yes, maybe we need something like setTableStats, but that is out of the scope of this issue.
  2. If the TableSource is changed, shouldn't we always create a new TableSourceTable and call getTableStats() again? How do we know the stats haven't changed?

Contributor

  1. I do not think it's a good idea to add setTableStats to TableSource; one big reason is that each TableSource would have to implement this method, and a default implementation does not work because a TableSource does not know whether the catalog can provide its stats.

  2. Yes, TableSource is immutable in TableSourceTable. Currently, whether to use the original stats or unknown stats for a new TableSourceTable is decided in each rule. You can reference the related code in PushProjectIntoTableSourceScanRule and PushFilterIntoTableSourceScanRule in the Blink inner code.

types: Array[TypeInformation[_]],
fields: Array[String],
statistic: FlinkStatistic = FlinkStatistic.UNKNOWN): Table
statistic: TableStats,
Contributor

@godfreyhe godfreyhe Jun 17, 2019

We may add more info to FlinkStatistic in the future; using FlinkStatistic instead of individual fields makes sure this method and the related test cases stay stable. relModifiedMonotonicity is also a member of FlinkStatistic and is not covered by this method.

Member Author

If more information needs to be added to FlinkStatistic, I think it should also be included in TableStats. Regarding relModifiedMonotonicity, it is only used internally by the intermediate table source (IntermediateRelTable), which keeps FlinkStatistic as a constructor parameter.

Contributor

What's the problem with using FlinkStatistic?

Member Author

We removed the FlinkStatistic from the constructor of TableSourceTable to make statistic derivation simpler. That's why we can't build a TableSourceTable from a FlinkStatistic.

new TableSourceTable[BaseRow](tableSource, false)
}


Contributor

Delete the redundant blank line.

@wuchong
Member Author

wuchong commented Nov 15, 2019

This PR will be split into several PRs. Let's continue with the first one in #10213.

@wuchong wuchong closed this Nov 15, 2019
@wuchong wuchong deleted the tableSchema branch November 15, 2019 08:40