[FLINK-30922][table-planner] Apply persisted columns when doing appendPartitionAndNullsProjects #21897
Conversation
@flinkbot run azure
1 similar comment
@flinkbot run azure
@PatrickRen could you please help figure out why it didn't trigger the azure run after the @flinkbot run azure comment?
@shuiqiangchen You need to rebase your PR on the latest changes in master.
Force-pushed from 996be99 to ca74da7
@MartijnVisser Thank you for the reminder. It worked after rebasing.
@shuiqiangchen thanks for fixing this! I've left some comments, and we'd better also add tests for a sink table that has computed columns (virtual and non-virtual).
/**
 * Return {@link TableSchema} which consists of all persisted columns. That means, the virtual
* computed columns and metadata columns are filterd out. |
nit: -> filtered
* computed columns and metadata columns are filterd out.
*
* <p>Readers(or writers) such as {@link TableSource} and {@link TableSink} should use this
* persisted schema to generate {@link TableSource#getProducedDataType()} and {@link |
Users should know the difference between this method and getPhysicalSchema
* TableSource#getTableSchema()} rather than using the raw TableSchema which may contains
* additional columns.
*/
public static TableSchema getPersistedSchema(TableSchema tableSchema) { |
we'd better extract a common method for this and getPhysicalSchema, since only one condition differs.
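A rough sketch of what that extraction could look like (this is not the actual change in this PR; it assumes the legacy TableColumn predicates isPhysical()/isPersisted() and ignores details such as primary keys and watermark specs):

```java
import java.util.function.Predicate;

import org.apache.flink.table.api.TableColumn;
import org.apache.flink.table.api.TableSchema;

public final class TableSchemaUtilsSketch {

    /** Keeps only the physical columns, dropping computed and metadata columns. */
    public static TableSchema getPhysicalSchema(TableSchema tableSchema) {
        return copyColumns(tableSchema, TableColumn::isPhysical);
    }

    /** Keeps physical columns plus persisted (non-virtual) metadata columns. */
    public static TableSchema getPersistedSchema(TableSchema tableSchema) {
        return copyColumns(tableSchema, TableColumn::isPersisted);
    }

    /** Shared helper: copies every column matching the given predicate into a new schema. */
    private static TableSchema copyColumns(
            TableSchema tableSchema, Predicate<TableColumn> shouldKeep) {
        TableSchema.Builder builder = TableSchema.builder();
        tableSchema.getTableColumns().stream()
                .filter(shouldKeep)
                .forEach(column -> builder.field(column.getName(), column.getType()));
        return builder.build();
    }
}
```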
Force-pushed from ca74da7 to d6ddc18
@lincoln-lil Thank you for your comments. I have added a test case for partial insert with computed columns and improved the comments according to your suggestions.
LGTM!
@shuiqiangchen could you also create a backport PR to the release-1.17 branch?
…ns in column list of a partial-insert statement
This closes apache#21897
(cherry picked from commit a2d78e6)
@lincoln-lil Thank you, I will do it.
What is the purpose of the change
Fix the bug that all metadata and computed columns were excluded when doing preValidateRewrite for a partial insert; persisted columns are now applied when appending the partition and NULL projections.
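For illustration, a partial insert into a sink whose schema contains non-physical columns looks roughly like the following sketch (the table, columns, and connector are made up here and are not taken from the PR's tests; the reported bug is triggered by persisted metadata columns, which additionally require a connector with writable metadata):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PartialInsertExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Sink table with a virtual computed column next to the physical columns.
        tEnv.executeSql(
                "CREATE TABLE sink ("
                        + " a INT,"
                        + " b STRING,"
                        + " c AS a + 1"
                        + ") WITH ('connector' = 'blackhole')");

        // Partial insert: only column a is listed, so preValidateRewrite has to pad
        // the remaining writable columns with NULLs while leaving the non-physical
        // columns out of the rewritten column list.
        tEnv.executeSql("INSERT INTO sink (a) SELECT 1");
    }
}
```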
Brief change log
Verifying this change
This change has test cases covered in PartialInsertTest.
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (no)
Documentation