
Conversation

@JingsongLi
Contributor

What is the purpose of the change

  - name: TableNumber2
    type: source
    $VAR_UPDATE_MODE
    schema:
      - name: IntegerField2
        type: INT
      - name: StringField2
        type: VARCHAR
      - name: TimestampField3
        type: TIMESTAMP
    connector:
      type: filesystem
      path: "$VAR_SOURCE_PATH2"
    format:
      type: csv
      fields:
        - name: IntegerField2
          type: INT
        - name: StringField2
          type: VARCHAR
        - name: TimestampField3
          type: TIMESTAMP
      line-delimiter: "\n"
      comment-prefix: "#"

A table like this will fail in the SQL CLI.
The root cause is that we convert the properties into a CatalogTableImpl and then convert them back into properties. The schema type properties then use the new type system, which is not equal to the legacy types.
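A minimal sketch of the round trip, assuming Flink 1.10-era APIs (the demo class and key name below are illustrative, not from the actual code):

    import java.sql.Timestamp;

    import org.apache.flink.table.api.DataTypes;
    import org.apache.flink.table.api.TableSchema;
    import org.apache.flink.table.descriptors.DescriptorProperties;

    public class SchemaRoundTripDemo {
        public static void main(String[] args) {
            // A schema whose TIMESTAMP field is bridged to the legacy
            // java.sql.Timestamp conversion class, as produced when
            // converting from old TypeInformation.
            TableSchema original = TableSchema.builder()
                    .field("TimestampField3", DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class))
                    .build();

            // Round-trip through string properties, as happens via CatalogTableImpl.
            DescriptorProperties properties = new DescriptorProperties(true);
            properties.putTableSchema("schema", original);
            TableSchema restored = properties.getTableSchema("schema");

            // The restored DataType carries the default conversion class
            // (java.time.LocalDateTime), so equals() fails even though the
            // logical types are identical.
            System.out.println(original.equals(restored)); // expected: false
        }
    }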

Brief change log

I think the fix could be to compare LogicalType in TableSchema.equals/hashCode instead of DataType with its conversion classes.
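A minimal sketch of that refactoring, assuming the fieldNames/fieldDataTypes arrays that TableSchema held internally at the time (hashCode would be adjusted the same way):

    // Inside TableSchema: compare column names plus LogicalType only, so that
    // differing bridging/conversion classes no longer break equality.
    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        TableSchema that = (TableSchema) o;
        if (fieldNames.length != that.fieldNames.length) {
            return false;
        }
        for (int i = 0; i < fieldNames.length; i++) {
            if (!fieldNames[i].equals(that.fieldNames[i])
                    || !fieldDataTypes[i].getLogicalType()
                            .equals(that.fieldDataTypes[i].getLogicalType())) {
                return false;
            }
        }
        return true;
    }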

Verifying this change

LocalExecutorITCase

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 453d2f6 (Tue Feb 25 04:35:27 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.

Details
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Feb 25, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@wuchong
Member

wuchong commented Feb 25, 2020

I'm still not sure about this approach, because the DataType stored in the TableSchema may still hold different conversion classes.

Btw, if we want to go with this approach, we should create another JIRA issue with an explicit title under the Table SQL/API component. Otherwise, the API change is too silent.

@JingsongLi
Contributor Author

I'm still not sure about this approach, because the DataType stored in the TableSchema may still hold different conversion classes.

Btw, if we want to go with this approach, we should create another JIRA issue with an explicit title under the Table SQL/API component. Otherwise, the API change is too silent.

Created https://issues.apache.org/jira/browse/FLINK-16270 .

This is a pitfall; if it is not fixed, I think it will lead to more problems.

Conversion classes should not exist in TableSchema, but for the legacy planner it is hard to remove them.

The origin of this problem is in DescriptorProperties: it does not serialize the conversion classes in TableSchema. After putTableSchema and getTableSchema, the conversion classes have changed.
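For illustration, the serialized schema properties carry only the logical type string and nothing about the conversion class; the entries look roughly like this (key names are indicative and vary across Flink versions):

    schema.2.name=TimestampField3
    schema.2.data-type=TIMESTAMP(3)

So reading the schema back can only fall back to the default conversion class of each type.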

@wuchong
Member

wuchong commented Feb 25, 2020

If TableSchema only accepts/stores new types, I think it's fine to refactor equals. However, as you said, the legacy planner is still using the old types in TableSchema. That's why I'm not sure whether refactoring equals would cause a problem.

@JingsongLi
Contributor Author

If TableSchema only accepts/stores new types, I think it's fine to refactor equals. However, as you said, the legacy planner is still using the old types in TableSchema. That's why I'm not sure whether refactoring equals would cause a problem.

Whether it's the legacy planner or Blink, the conversion classes in TableSchema have no actual meaning.
The role of the conversion classes in the legacy planner is to recover the old SqlTimestampTypeInfo; it does not mean the planner treats different conversion classes differently.
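An illustrative sketch of that bridging, using the converter the planners rely on (the recovered TypeInformation is, I believe, SqlTimeTypeInfo's TIMESTAMP under the hood):

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.table.types.DataType;
    import org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter;

    public class LegacyBridgingDemo {
        public static void main(String[] args) {
            // Converting the legacy type info records java.sql.Timestamp as the
            // conversion class of the resulting TIMESTAMP(3) DataType.
            DataType dataType = LegacyTypeInfoDataTypeConverter.toDataType(Types.SQL_TIMESTAMP);

            // Only this bridging step reads the conversion class back, recovering
            // the old timestamp type info for the legacy planner.
            TypeInformation<?> typeInfo = LegacyTypeInfoDataTypeConverter.toLegacyTypeInfo(dataType);
            System.out.println(typeInfo);
        }
    }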

JingsongLi closed this Feb 26, 2020
JingsongLi deleted the csve2ebug branch April 26, 2020 05:32