[FLINK-19823][table][fs-connector] Filesystem connector supports de/serialization schema #13957
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.

Automated checks: last check on commit eded6a8 (Fri Nov 06 08:33:12 UTC 2020).
Mention the bot in a comment to re-run the automated checks.

Review progress: please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks the review progress through labels, which are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands: the @flinkbot bot supports the following commands:
```java
public Reader restoreReader(Configuration config, FileSourceSplit split) throws IOException {
    Reader reader = new Reader(config, split);
    reader.seek(split.getReaderPosition().get().getRecordsAfterOffset());
    return null;
}
```
return reader?
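A minimal, self-contained sketch of what the reviewer is pointing at (the `Reader` stub below is hypothetical, standing in for the connector's actual reader type): the method constructs and seeks a reader but then returns `null`, so the seeked reader should be returned instead.

```java
// Hypothetical stand-in for the connector's reader type, for illustration only.
class Reader {
    private long position;

    void seek(long recordsAfterOffset) {
        this.position = recordsAfterOffset;
    }

    long position() {
        return position;
    }
}

public class RestoreReaderSketch {
    // Mirrors the shape of the reviewed restoreReader: construct the reader,
    // seek it to the checkpointed position, and return it (the reviewed code
    // returned null instead, discarding the reader it just prepared).
    static Reader restoreReader(long recordsAfterOffset) {
        Reader reader = new Reader();
        reader.seek(recordsAfterOffset);
        return reader; // not null
    }

    public static void main(String[] args) {
        Reader restored = restoreReader(42L);
        System.out.println(restored.position()); // prints 42
    }
}
```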
Streaming restore case: no test coverage.
But there is no way to test it, because the current filesystem connector does not support streaming reading.
```java
// NOTE: we need to pass the full format types to deserializationFormat
DeserializationSchema<RowData> decoder = deserializationFormat.createRuntimeDecoder(
        createSourceContext(context), getFormatDataType());
int[] projectedFields = IntStream.of(0, schema.getFieldCount()).toArray();
```
IntStream.range?
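The distinction the reviewer is drawing, shown with plain JDK streams: `IntStream.of(0, n)` produces a stream of just the two values 0 and n, while `IntStream.range(0, n)` produces every index from 0 up to (but excluding) n, which is what a projected-fields array needs.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class IntStreamDemo {
    public static void main(String[] args) {
        int fieldCount = 4; // e.g. schema.getFieldCount()

        // IntStream.of(0, fieldCount) yields only the two literal values.
        int[] ofResult = IntStream.of(0, fieldCount).toArray();
        System.out.println(Arrays.toString(ofResult)); // prints [0, 4]

        // IntStream.range(0, fieldCount) yields every field index.
        int[] rangeResult = IntStream.range(0, fieldCount).toArray();
        System.out.println(Arrays.toString(rangeResult)); // prints [0, 1, 2, 3]
    }
}
```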
Streaming sink compaction case: no test coverage.
I'll add a JSON compaction test.
```java
createTable("sink", sink.toURI().toString(), false);

tEnv().executeSql("insert into sink select * from source").await();
CloseableIterator<Row> iter = tEnv().executeSql("select * from sink").collect();
```
Select some fields to trigger projection push-down? We could write one more column into the sink, e.g. `UPPER(name)`, and then select out the `upper_name` instead of the `name`.
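A hypothetical illustration of the suggested test shape (plain Java, not Flink code; the row layout and column names here are made up): the sink holds an extra derived column `upper_name`, and the read-back query selects only that column, so the test only passes if the source actually drops the unselected `name` field, i.e. applies projection push-down.

```java
import java.util.ArrayList;
import java.util.List;

public class ProjectionSketch {
    public static void main(String[] args) {
        // Rows as if written by "insert into sink select name, UPPER(name) from source".
        String[][] sinkRows = {{"alice", "ALICE"}, {"bob", "BOB"}};

        // "select upper_name from sink": only column index 1 is projected out.
        List<String> projected = new ArrayList<>();
        for (String[] row : sinkRows) {
            projected.add(row[1]);
        }
        System.out.println(projected); // prints [ALICE, BOB]
    }
}
```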
```diff
@@ -94,6 +94,10 @@
  */
 public class BinaryRowDataTest {

+	public static void main(String[] args) {
```
remove.
LGTM.
What is the purpose of the change
Integrate de/serialization schema to Filesystem connector.
Brief change log
- DeserializationSchemaAdapter
- SerializationSchemaAdapter
- FileSystemFormatFactory
- FileSystemTableSource and FileSystemTableSink
- getChangelogMode
Verifying this change
This change is already covered by existing tests.
Does this pull request potentially affect one of the following parts:
- @Public(Evolving): no

Documentation