Conversation

@lirui-apache
Contributor

…Sink

What is the purpose of the change

Make HiveTableSink implement PartitionableTableSink, so that HiveTableSink supports static partitioning.

Brief change log

  • Implemented PartitionableTableSink.
  • Moved another test case from HiveTableOutputFormatTest to HiveTableSinkTest.

Verifying this change

Existing test case.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no
  • If yes, how is the feature documented? NA

@flinkbot
Collaborator

flinkbot commented Jul 3, 2019

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 8e18c2d (Wed Aug 07 08:15:00 UTC 2019)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ✅ 1. The [description] looks good.
  • ✅ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ✅ 4. The change fits into the overall [architecture].
  • ✅ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.

Details
The bot tracks review progress through labels, which are applied in the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@lirui-apache
Contributor Author

cc @zjuwangg @bowenli86 @xuefuz

Preconditions.checkArgument(numStaticPart == 0,
"Dynamic partition cannot appear before static partition");
} else {
numStaticPart--;
Contributor

This line doesn't seem to have any effect.

Contributor Author

It actually does. We'll use it to make sure we don't see any dynamic partition columns before we have seen all the static ones.

Contributor

I see. After re-reading the code, the logic is clear. Looks good.
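The decrement discussed above can be sketched as a minimal, self-contained snippet. The class and method names here are hypothetical illustrations of the mechanism, not the actual HiveTableSink code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class PartitionOrderCheck {

    // Returns true iff every static partition column precedes all dynamic ones.
    // numStaticPart counts the static columns not yet seen; by the time a
    // dynamic column appears, the count must already have dropped to zero.
    static boolean staticBeforeDynamic(List<String> partitionCols, Map<String, String> staticPartitionSpec) {
        int numStaticPart = staticPartitionSpec.size();
        for (String partitionCol : partitionCols) {
            if (!staticPartitionSpec.containsKey(partitionCol)) {
                // dynamic partition column: all static columns must come first
                if (numStaticPart != 0) {
                    return false;
                }
            } else {
                // one fewer static column left to see
                numStaticPart--;
            }
        }
        return true;
    }
}
```

With partition columns (p1, p2), a static spec of {p1=a} passes, while {p2=b} fails because p1 would then be a dynamic column appearing before the static p2.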

int numStaticPart = staticPartitionSpec.size();
if (numStaticPart < partitionCols.size()) {
for (String partitionCol : partitionCols) {
if (!staticPartitionSpec.containsKey(partitionCol)) {
Contributor

It seems that this if condition will become true (so the assertion would fail) at some point whenever there is a dynamic partition column.

Contributor Author

Yes. And when it's true, we will verify that we have checked all the static partition columns.

Contributor

@xuefuz left a comment

Thanks for the contribution. I left a couple of minor comments for consideration.

// make it a LinkedHashMap to maintain partition column order
staticPartitionSpec = new LinkedHashMap<>();
for (String partitionCol : getPartitionFieldNames()) {
if (partitions.containsKey(partitionCol)) {
Member

what if it doesn't contain the key?

Contributor Author

It means that partition column is not contained in the static spec, and therefore it's a dynamic partition column.
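That construction can be sketched as follows, assuming partition values are plain strings and using a hypothetical class name (not the merged HiveTableSink code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StaticSpecBuilder {

    // Rebuilds the user-supplied static spec as a LinkedHashMap so that its
    // keys follow the declared partition-column order rather than the order
    // of the incoming map. Columns absent from the incoming map are simply
    // dynamic partition columns and are skipped.
    static LinkedHashMap<String, String> orderedStaticSpec(
            List<String> partitionFieldNames, Map<String, String> partitions) {
        LinkedHashMap<String, String> staticPartitionSpec = new LinkedHashMap<>();
        for (String partitionCol : partitionFieldNames) {
            if (partitions.containsKey(partitionCol)) {
                staticPartitionSpec.put(partitionCol, partitions.get(partitionCol));
            }
        }
        return staticPartitionSpec;
    }
}
```

Because LinkedHashMap preserves insertion order, iterating the result later yields the static columns in declared partition order regardless of how the caller's map was ordered.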


private void validatePartitionSpec() {
List<String> partitionCols = getPartitionFieldNames();
Preconditions.checkArgument(new HashSet<>(partitionCols).containsAll(
Member

A few comments here:

  • can we output which column is unknown to the error message?

  • the check new HashSet<>(partitionCols).containsAll(staticPartitionSpec.keySet()) loses the order of the partitions, which seems insufficient for a legit validation. Should it be partitionCols.equals(new ArrayList<>(staticPartitionSpec.keySet()))? Since staticPartitionSpec is a LinkedHashMap, the keys from its keySet() should be ordered.

  • nit: can we reformat it to be more readable?

Preconditions.checkArgument(
    ...
    "...");

Contributor Author

Yeah, I'll print the specific columns and reformat the code.
We don't need to check the order here. Partition column order is defined by getPartitionFieldNames(). We can always reorder a partition spec (which is a map) as long as it only contains valid partition columns.

@lirui-apache
Contributor Author

Thanks @xuefuz and @bowenli86 for the review. Please take another look.

@lirui-apache
Contributor Author

CI passed on my personal repo:
https://travis-ci.org/lirui-apache/flink/builds/554079872

Contributor

@xuefuz left a comment

LGTM.

@lirui-apache
Contributor Author

The CI failure is due to a checkstyle failure in flink-table-planner-blink and is thus not related to this PR.

@bowenli86
Member

LGTM, thanks for your contribution!

@flinkbot approve all

I've restarted the CI since it's failing at the compile stage. Will merge it once it passes

@zentol
Contributor

zentol commented Jul 5, 2019

CI report for commit 24f08d4: FAILURE Travis

@bowenli86
Member

@lirui-apache can you rebase the PR?

@flinkbot
Collaborator

flinkbot commented Jul 8, 2019

CI report for commit 24f08d4: FAILURE Build

@flinkbot
Collaborator

flinkbot commented Jul 9, 2019

CI report for commit 8e18c2d: FAILURE Build

@lirui-apache
Contributor Author

The test failure cannot be reproduced locally.
Travis build of my own repo passed at: https://travis-ci.org/lirui-apache/flink/builds/556113232

@lirui-apache
Contributor Author

PR rebased. @bowenli86 please take another look, thanks.

@bowenli86
Member

LGTM. Merging

@asfgit asfgit closed this in d5f8078 Jul 9, 2019
asfgit pushed a commit that referenced this pull request Jul 9, 2019
Resolve bad merge between #8965 and #8987 though they passed CI separately.
@lirui-apache lirui-apache deleted the FLINK-13068 branch July 10, 2019 03:10
