
Conversation

@JingsongLi
Contributor

What is the purpose of the change

Hive ORC uses java.sql.Timestamp to read and write ORC files... by default, the timestamp takes the time zone into account when adjusting the seconds.
Our vectorized ORC reader should also read through java.sql.Timestamp so that the time zone is respected.
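For illustration, a minimal sketch of that read path, assuming Hive's TimestampColumnVector (time[] in milliseconds, nanos[] as nano-of-second) and an SqlTimestamp.fromTimestamp factory; this helper is a sketch, not the exact code of this PR:

import java.sql.Timestamp;

import org.apache.flink.table.dataformat.SqlTimestamp;
import org.apache.hadoop.hive.ql.exec.vector.TimestampColumnVector;

public class OrcTimestampReadSketch {

    // Rebuild the java.sql.Timestamp that Hive used on the write side, then
    // derive SqlTimestamp from it, so the time-zone handling matches Hive.
    static SqlTimestamp readTimestamp(TimestampColumnVector vector, int index) {
        Timestamp timestamp = new Timestamp(vector.time[index]);
        timestamp.setNanos(vector.nanos[index]);
        return SqlTimestamp.fromTimestamp(timestamp);
    }
}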

Brief change log

  • OrcTimestampColumnVector should get SqlTimestamp from java.sql.Timestamp.
  • AbstractOrcColumnVector.createTimestampVector should fill data with java.sql.Timestamp.

Verifying this change

OrcColumnarRowSplitReaderTest

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no


@flinkbot
Collaborator

flinkbot commented Dec 5, 2019

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit c4582e2 (Thu Dec 05 03:57:10 UTC 2019)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.

Details
The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

Contributor

@xuefuz xuefuz left a comment


Thanks for the contribution. I left two minor comments for consideration.

Timestamp timestamp = new Timestamp(millisecond);
timestamp.setNanos(nanoOfSecond);
Timestamp timestamp = value instanceof LocalDateTime ?
    Timestamp.valueOf((LocalDateTime) value) : (Timestamp) value;
Contributor


Maybe it doesn't matter much, but I'm curious if we need to deal with both types.

Contributor Author


java.sql.Timestamp is the default format in the Hive world, while LocalDateTime is the default format in the Flink world.
Either way, it must be correct to support both.
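For illustration only, a self-contained sketch of such a write-side normalization; the fill helper and class name are hypothetical, only the instanceof expression comes from the diff quoted above:

import java.sql.Timestamp;
import java.time.LocalDateTime;

import org.apache.hadoop.hive.ql.exec.vector.TimestampColumnVector;

public class TimestampFillSketch {

    // Accept either Flink's default representation (LocalDateTime) or Hive's
    // default (java.sql.Timestamp) and write it into the ORC column vector.
    static void fill(TimestampColumnVector vector, int row, Object value) {
        Timestamp timestamp = value instanceof LocalDateTime
                ? Timestamp.valueOf((LocalDateTime) value)
                : (Timestamp) value;
        vector.set(row, timestamp);
    }
}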

col2.nanos[i] = i;

Timestamp timestamp = Timestamp.valueOf(
    padZero(4, i + 1000) + "-01-01 00:00:00." + i);
Contributor


The test would cover more cases if the values are more representative (rather than a lot of zeros for parts of the timestamp).

Contributor Author


Actually it already tests values from 1000-00-00 up to 2023-00-00, but I can assign all the fields too.
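Purely as an illustration of "assigning all values", a hypothetical generator in which every timestamp field varies with the row index (not code from this PR):

import java.sql.Timestamp;
import java.time.LocalDateTime;

public class RepresentativeTimestamps {

    // Vary year, month, day, time-of-day and nanos together instead of
    // leaving most components at zero.
    static Timestamp representative(int i) {
        return Timestamp.valueOf(LocalDateTime.of(
                1000 + i, 1 + i % 12, 1 + i % 28,
                i % 24, i % 60, i % 60, i % 1_000_000_000));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(representative(i));
        }
    }
}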

@flinkbot
Collaborator

flinkbot commented Dec 5, 2019

CI report:

Bot commands
The @flinkbot bot supports the following commands:
  • @flinkbot run travis to re-run the last Travis build

return SqlTimestamp.fromEpochMillis(
    vector.time[index],
    SqlTimestamp.isCompact(precision) ? 0 : vector.nanos[index] % 1_000_000);
Timestamp timestamp = new Timestamp(vector.time[index]);
Contributor


What's the difference with the original one?

Contributor Author


The original one directly uses the underlying long and int to construct SqlTimestamp,
but Hive ORC uses java.sql.Timestamp to construct the underlying data. You can think of it as:

java.sql.Timestamp orcTimestamp;
SqlTimestamp.fromEpochMillis(orcTimestamp.getTime(), orcTimestamp.getNanos());
vs.
SqlTimestamp.fromString(orcTimestamp.toString());
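To make that comparison concrete, a JDK-only sketch (no Flink types, not code from this PR) showing that the rendered local time, which Timestamp.toString()/valueOf work with, shifts with the JVM's default zone while the epoch value does not:

import java.sql.Timestamp;
import java.util.TimeZone;

public class TimestampZoneDemo {

    public static void main(String[] args) {
        long epochMillis = 0L; // 1970-01-01T00:00:00Z

        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
        System.out.println(new Timestamp(epochMillis));      // 1970-01-01 00:00:00.0

        TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"));
        System.out.println(new Timestamp(epochMillis));      // 1970-01-01 08:00:00.0

        // The underlying epoch millis are unchanged; only the local rendering differs.
        System.out.println(new Timestamp(epochMillis).getTime()); // 0
    }
}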

Contributor


So the latter one will be influenced by the local time zone?

Contributor Author


Yes

Contributor

@KurtYoung KurtYoung left a comment


LGTM, will merge this after travis


@KurtYoung KurtYoung merged commit 61f9f2f into apache:master Dec 5, 2019
@JingsongLi JingsongLi deleted the zone branch January 14, 2020 03:46