[FLINK-24687][table] Move FileSystemTableSource/Sink in flink-connector-files #17897
Conversation
Force-pushed 8a51fc1 to 620501a.
Force-pushed 620501a to 1ade96b.
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.

Automated Checks: last check on commit 620501a (Wed Nov 24 15:32:22 UTC 2021). Warnings:
Mention the bot in a comment to re-run the automated checks.

Review Progress: please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks review progress through labels, applied in the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands: the @flinkbot bot supports the following commands.
Force-pushed 1ade96b to 6eeff57.
Awesome stuff! Please make sure to rebase to current master and add this connector to flink-architecture-tests 😃

The files connector is already in
Thanks for doing all the effort and disentangling the dependencies 👏
Overall, the changes look correct, but I would propose to change the commit structure. In general, [hotfix] commits are only useful for really trivial changes, e.g. fixing license header formatting, but not for related work of a certain ticket. I'd propose that all commits on this PR include the ticket number, and it would also be great to add the affected modules, e.g. [table], [file], [orc].
Can you also double-check that the classes you have moved to the file connector package are annotated with @Internal if users should not use them directly?
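For illustration, here is a minimal sketch of what such an annotation and check look like. Flink's real annotation is org.apache.flink.annotation.Internal; the stand-in declared below exists only to keep the sketch self-contained, and the class name is made up.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Minimal stand-in for org.apache.flink.annotation.Internal, declared here
// only so the example compiles on its own.
@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface Internal {}

// A class moved into the file connector that users should not touch directly
// would carry the annotation like this (the class name is illustrative):
@Internal
class MovedConnectorClass {
    static boolean isMarkedInternal(Class<?> clazz) {
        return clazz.isAnnotationPresent(Internal.class);
    }
}
```

An architecture test could then assert that every public class in the moved package passes such a check.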
...ble/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java
(outdated, resolved)
loadResult -> {
    if (loadResult.failed()) {
        if (loadResult.getError() instanceof NoClassDefFoundError) {
            LOG.debug(
We should definitely write a test to verify that the discovery does not fail if the base interface is not found
.forEachRemaining(
        loadResult -> {
            if (loadResult.failed()) {
                if (loadResult.getError() instanceof NoClassDefFoundError) {
Can we be more precise about which error we want to catch? So far we only want to catch the exception for the flink-connector-files interfaces, but I do not think we should continue in general for NoClassDefFoundError.
The issue is that NoClassDefFoundError and its cause ClassNotFoundException both have no method to get the name of the class that failed to load. Reference:
- https://docs.oracle.com/javase/8/docs/api/java/lang/ClassNotFoundException.html
- https://docs.oracle.com/javase/8/docs/api/java/lang/NoClassDefFoundError.html
I could extract the class name from the exception message, but I feel like that could be an unstable method and might break with certain JDKs... WDYT? Do you have some alternatives in mind?
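To make the concern concrete, here is a sketch of the message-based matching being discussed. The helper name is hypothetical, and relying on getMessage() is exactly the fragility mentioned above, since the message format is not specified by the Java API.

```java
// Hedged sketch: NoClassDefFoundError exposes the missing class only through
// its message (typically in internal "binary" form, e.g. org/apache/flink/...),
// so the best available option is a substring match, which is JVM-dependent.
class NoClassDefFoundErrorMatcher {
    // Hypothetical helper: treat the error as ignorable only if the message
    // mentions a flink-connector-files class.
    static boolean isFileConnectorClassMissing(NoClassDefFoundError e) {
        String message = e.getMessage();
        return message != null && message.contains("org/apache/flink/connector/file");
    }
}
```

A wrong match here is harmless at worst (a factory is skipped with a debug log), which is why a heuristic may be acceptable despite its instability.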
public boolean hasNext() {
    try {
        return iterator.hasNext();
    } catch (Throwable t) {
What kind of Throwable do we expect here?
Check https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html#iterator-- for more details, but in short both hasNext() and next() might fail according to this javadoc, and they can throw any kind of error:

"Its hasNext and next methods can therefore throw a ServiceConfigurationError if a provider-configuration file violates the specified format, or if it names a provider class that cannot be found and instantiated, or if the result of instantiating the class is not assignable to the service type, or if any other kind of exception or error is thrown as the next provider is located and instantiated."
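A sketch of the guarded iteration pattern this implies (names are illustrative, not Flink's actual ServiceLoaderUtil): each hasNext()/next() call is wrapped individually so one broken provider does not abort the whole discovery.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.ServiceLoader;

// Sketch: both hasNext() and next() of ServiceLoader's iterator may throw
// (e.g. ServiceConfigurationError), so each step is guarded and failures are
// collected as results instead of being propagated.
class TolerantServiceLoading {
    static <T> List<Object> loadAll(Class<T> service, ClassLoader classLoader) {
        List<Object> results = new ArrayList<>();
        Iterator<T> iterator = ServiceLoader.load(service, classLoader).iterator();
        int consecutiveFailures = 0;
        while (consecutiveFailures < 100) { // guard against hasNext() failing forever
            try {
                if (!iterator.hasNext()) {
                    break;
                }
                results.add(iterator.next());
                consecutiveFailures = 0;
            } catch (Throwable t) {
                results.add(t); // record the failure and keep iterating
                consecutiveFailures++;
            }
        }
        return results;
    }
}
```

The failure cap is an assumption of this sketch: if hasNext() throws before advancing, a naive loop could spin forever on the same broken entry.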
@@ -53,12 +52,14 @@ class BatchPhysicalLegacySinkRule extends ConverterRule(
    val dynamicPartIndices =
        dynamicPartFields.map(partitionSink.getTableSchema.getFieldNames.indexOf(_))

// TODO This option is hardcoded to remove the dependency of planner from
Do we already have a follow-up ticket to introduce a separate option?
@JingsongLi do you know if this option is still actively used?
I know of others who use it because we don't have the partition by syntax in SQL.
...rs/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/BinPacking.java (resolved)
...files/src/main/java/org/apache/flink/connector/file/table/stream/compact/CompactContext.java (resolved)
...iles/src/main/java/org/apache/flink/connector/file/table/stream/compact/CompactOperator.java (resolved)
flink-table/flink-table-uber/pom.xml
Outdated
@@ -88,6 +88,11 @@ under the License.
    <artifactId>flink-cep</artifactId>
    <version>${project.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
Why do we again include the file connector in the uber-jar?
This PR doesn't change the uber jar building, which on master already includes flink-connector-files. The reason for this specific change is that, before this PR, flink-table-common depended on flink-connector-files, so flink-table-uber depended on flink-connector-files through flink-table-common. Now none of the flink-table-* packages depends on flink-connector-files anymore, so we need to import it here explicitly.
I'm also skeptical about this change. It would be better to explicitly add flink-connector-files as a separate dependency to lib. It must not be part of the table jars.
We should also add a comment in the docs that the filesystem connector is in lib by default.
"I'm also skeptical about this change"
But this is no different from master. If you build flink-table-uber on master, you'll see that flink-connector-files is inside the resulting jar, because it is brought in transitively from flink-table-common. If we want to make such a change, I suggest we do it in a separate PR with a separate issue, as it's user facing.
Hmm, but when reading the ticket https://issues.apache.org/jira/browse/FLINK-24687, isn't that exactly what we want to solve? We want to exclude flink-connector-files from the uber jar to prevent conflicts with user jars; if we keep including it, we did not fix the problem.
Uhm you're right, I forgot what the initial issue title was about 😃
I've pushed a commit to remove the flink-connector-files shading from the table-uber and to add flink-connector-files to the lib folder in the distribution. I've also manually tested the new distribution output with a sample source-to-sink SQL batch job using the SQL client.
This is the distribution tree now:
$ tree
.
├── bin
│ ├── bash-java-utils.jar
│ ├── config.sh
│ ├── find-flink-home.sh
│ ├── flink
│ ├── flink-console.sh
│ ├── flink-daemon.sh
│ ├── historyserver.sh
│ ├── jobmanager.sh
│ ├── kubernetes-jobmanager.sh
│ ├── kubernetes-session.sh
│ ├── kubernetes-taskmanager.sh
│ ├── pyflink-shell.sh
│ ├── sql-client.sh
│ ├── standalone-job.sh
│ ├── start-cluster.sh
│ ├── start-zookeeper-quorum.sh
│ ├── stop-cluster.sh
│ ├── stop-zookeeper-quorum.sh
│ ├── taskmanager.sh
│ ├── yarn-session.sh
│ └── zookeeper.sh
├── conf
│ ├── flink-conf.yaml
│ ├── log4j-cli.properties
│ ├── log4j-console.properties
│ ├── log4j.properties
│ ├── log4j-session.properties
│ ├── logback-console.xml
│ ├── logback-session.xml
│ ├── logback.xml
│ ├── masters
│ ├── workers
│ └── zoo.cfg
├── examples
│ ├── batch
│ │ ├── ConnectedComponents.jar
│ │ ├── DistCp.jar
│ │ ├── EnumTriangles.jar
│ │ ├── KMeans.jar
│ │ ├── PageRank.jar
│ │ ├── TransitiveClosure.jar
│ │ ├── WebLogAnalysis.jar
│ │ └── WordCount.jar
│ ├── gelly
│ │ └── flink-gelly-examples_2.12-1.15-SNAPSHOT.jar
│ ├── python
│ │ ├── datastream
│ │ │ ├── event_time_timer.py
│ │ │ ├── __init__.py
│ │ │ ├── process_json_data.py
│ │ │ ├── state_access.py
│ │ │ └── word_count.py
│ │ └── table
│ │ ├── __init__.py
│ │ ├── mixing_use_of_datastream_and_table.py
│ │ ├── multi_sink.py
│ │ ├── pandas
│ │ │ ├── conversion_from_dataframe.py
│ │ │ ├── __init__.py
│ │ │ └── pandas_udaf.py
│ │ ├── process_json_data.py
│ │ ├── process_json_data_with_udf.py
│ │ ├── windowing
│ │ │ ├── __init__.py
│ │ │ ├── over_window.py
│ │ │ ├── session_window.py
│ │ │ ├── sliding_window.py
│ │ │ └── tumble_window.py
│ │ └── word_count.py
│ ├── streaming
│ │ ├── Iteration.jar
│ │ ├── SessionWindowing.jar
│ │ ├── SocketWindowWordCount.jar
│ │ ├── StateMachineExample.jar
│ │ ├── TopSpeedWindowing.jar
│ │ ├── Twitter.jar
│ │ ├── WindowJoin.jar
│ │ └── WordCount.jar
│ └── table
│ ├── AdvancedFunctionsExample.jar
│ ├── ChangelogSocketExample.jar
│ ├── GettingStartedExample.jar
│ ├── StreamSQLExample.jar
│ ├── StreamWindowSQLExample.jar
│ ├── UpdatingTopCityExample.jar
│ └── WordCountSQLExample.jar
├── lib
│ ├── flink-connector-files-1.15-SNAPSHOT.jar
│ ├── flink-csv-1.15-SNAPSHOT.jar
│ ├── flink-dist-1.15-SNAPSHOT.jar
│ ├── flink-json-1.15-SNAPSHOT.jar
│ ├── flink-scala_2.12-1.15-SNAPSHOT.jar
│ ├── flink-shaded-zookeeper-3.4.14.jar
│ ├── flink-table_2.12-1.15-SNAPSHOT.jar
│ ├── log4j-1.2-api-2.14.1.jar
│ ├── log4j-api-2.14.1.jar
│ ├── log4j-core-2.14.1.jar
│ └── log4j-slf4j-impl-2.14.1.jar
├── LICENSE
├── log
│ ├── flink-slinkydeveloper-sql-client-fedora.log
│ ├── flink-slinkydeveloper-standalonesession-0-fedora.log
│ ├── flink-slinkydeveloper-standalonesession-0-fedora.out
│ ├── flink-slinkydeveloper-taskexecutor-0-fedora.log
│ └── flink-slinkydeveloper-taskexecutor-0-fedora.out
├── opt
│ ├── flink-azure-fs-hadoop-1.15-SNAPSHOT.jar
│ ├── flink-cep-1.15-SNAPSHOT.jar
│ ├── flink-cep-scala_2.12-1.15-SNAPSHOT.jar
│ ├── flink-gelly-1.15-SNAPSHOT.jar
│ ├── flink-gelly-scala_2.12-1.15-SNAPSHOT.jar
│ ├── flink-oss-fs-hadoop-1.15-SNAPSHOT.jar
│ ├── flink-python_2.12-1.15-SNAPSHOT.jar
│ ├── flink-queryable-state-runtime-1.15-SNAPSHOT.jar
│ ├── flink-s3-fs-hadoop-1.15-SNAPSHOT.jar
│ ├── flink-s3-fs-presto-1.15-SNAPSHOT.jar
│ ├── flink-shaded-netty-tcnative-dynamic-2.0.39.Final-14.0.jar
│ ├── flink-shaded-zookeeper-3.5.9.jar
│ ├── flink-sql-client_2.12-1.15-SNAPSHOT.jar
│ ├── flink-state-processor-api-1.15-SNAPSHOT.jar
│ └── python
│ ├── cloudpickle-1.2.2-src.zip
│ ├── py4j-0.10.8.1-src.zip
│ └── pyflink.zip
├── plugins
│ ├── external-resource-gpu
│ │ ├── flink-external-resource-gpu-1.15-SNAPSHOT.jar
│ │ ├── gpu-discovery-common.sh
│ │ └── nvidia-gpu-discovery.sh
│ ├── metrics-datadog
│ │ └── flink-metrics-datadog-1.15-SNAPSHOT.jar
│ ├── metrics-graphite
│ │ └── flink-metrics-graphite-1.15-SNAPSHOT.jar
│ ├── metrics-influx
│ │ └── flink-metrics-influxdb-1.15-SNAPSHOT.jar
│ ├── metrics-jmx
│ │ └── flink-metrics-jmx-1.15-SNAPSHOT.jar
│ ├── metrics-prometheus
│ │ └── flink-metrics-prometheus-1.15-SNAPSHOT.jar
│ ├── metrics-slf4j
│ │ └── flink-metrics-slf4j-1.15-SNAPSHOT.jar
│ ├── metrics-statsd
│ │ └── flink-metrics-statsd-1.15-SNAPSHOT.jar
│ └── README.txt
└── README.txt
Force-pushed 80e8bf5 to 6793541.
Great work @slinkydeveloper. I added some comments. Same feedback as before: we could have split this PR into a couple of commits. E.g. the relocation of the columnar classes to flink-table-common would have been a nice commit on its own. It is very annoying for reviewers to scan single commits that contain both refactorings and crucial changes.
...ble/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java (resolved)
...ble/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java (outdated, resolved)
...ble/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java (outdated, resolved)
import static org.apache.flink.api.java.io.CsvOutputFormat.DEFAULT_FIELD_DELIMITER;

class TestCsvSerializationSchema implements SerializationSchema<RowData> {
Again: please add JavaDocs. What is this class good for?
There is already a doc in TestCsvFormatFactory, let me link it.
...iles/src/main/java/org/apache/flink/connector/file/table/stream/compact/CompactOperator.java (outdated, resolved)
.../src/test/java/org/apache/flink/connector/file/table/stream/compact/CompactOperatorTest.java (outdated, resolved)
...nk-orc-nohive/src/main/java/org/apache/flink/orc/nohive/OrcNoHiveColumnarRowInputFormat.java (outdated, resolved)
Force-pushed 6793541 to 0f1e5c4.
Thanks for the update. We should also update sql_connectors.yml to remove the Scala suffixes. Please scan the docs one more time to see if there is more to update.
flink-dist/pom.xml
Outdated
@@ -129,6 +129,12 @@ under the License.
    <!-- Default file system support. The Hadoop and MapR dependencies -->
    <!-- are optional, so not being added to the dist jar -->

    <dependency>
With this you add the flink-connector-files to the flink-dist jar, no? So it is in the release two times?
What makes you say it is in the release two times? My understanding is that on master flink-connector-files is included as part of the table uber jar, which is imported here and copied to /lib. With this PR instead, flink-connector-files is not part of the table uber jar but is imported directly and copied to /lib.
I might be missing something here...
Also, in the tree here I see it only once: #17897 (comment)
// Delete the sub interface now, so it can't be loaded
Files.delete(tempDir.resolve(subInterfaceName + ".class"));

assertThat(FactoryUtil.discoverFactories(classLoaderIncludingTheInterface))
nice test 👍
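For context, here is a self-contained sketch of the failure mode that test guards against: a provider-configuration file naming a class that cannot be loaded makes a plain ServiceLoader iteration blow up with a ServiceConfigurationError. All names below are made up; this is not the Flink test itself.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.Iterator;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

// Sketch: register a provider whose class does not exist, then show that a
// plain ServiceLoader iteration fails. A tolerant discovery mechanism must
// skip such entries instead of propagating the error.
class MissingProviderDemo {
    static boolean plainServiceLoaderFails() {
        try {
            Path dir = Files.createTempDirectory("service-demo");
            Path services = dir.resolve("META-INF").resolve("services");
            Files.createDirectories(services);
            // The registration names a class that is nowhere on the classpath.
            Files.write(services.resolve("java.lang.Runnable"),
                    Collections.singletonList("com.example.DeletedFactory"));
            try (URLClassLoader loader = new URLClassLoader(
                    new URL[] {dir.toUri().toURL()},
                    MissingProviderDemo.class.getClassLoader())) {
                Iterator<Runnable> it =
                        ServiceLoader.load(Runnable.class, loader).iterator();
                try {
                    while (it.hasNext()) {
                        it.next();
                    }
                    return false; // iteration unexpectedly succeeded
                } catch (ServiceConfigurationError expected) {
                    return true; // the failure mode a tolerant loader must handle
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```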
Force-pushed fc4cc48 to 2217c68.
Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…ies in the flink-dist to enforce the reactor order Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…ase in its uber jar Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…o tolerate NoClassDefFoundError. Added a test and converted FactoryUtil to use assertj. Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…onnectorOptions Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…imalDataUtils#is32BitDecimal in ParquetSchemaConverter to remove the dependency on DecimalDataUtils (from planner) Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…dent of planner (except ScanRuntimeProviderContext.INSTANCE::createDataStructureConverter) and to implement SerializationSchema more than BulkWriterFormatFactory. Moved to a specific package Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
Thanks for the update. One last comment. Should be good in the next iteration. Please rebase and resolve merge conflicts.
…FileSystemTableSource, BulkDecodingFormat, FileSystemFormatFactory, BulkWriterFormatFactory and BulkReaderFormatFactory to flink-connector-files. Move columnar data types to flink-table-common. Now table packages don't depend on flink-connector-files anymore. Fix orc and parquet format to use only common classes and not planner nor runtime classes. Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…rfaces Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…k-parquet Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
…endency, which was previously brought in through flink-table-api-java-bridge -> flink-table-api-java -> flink-table-common -> flink-connector-files -> flink-connector-base. Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
Force-pushed fd6cc2f to 004179a.
@flinkbot run azure
Force-pushed 004179a to 512fd91.
…ide table-uber anymore but it's loaded in /lib in the distribution as flink-connector-files Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
Signed-off-by: slinkydeveloper <francescoguard@gmail.com>
Force-pushed 512fd91 to a96dbbc.
LGTM
…TableSource to flink-connector-files and columnar support to flink-table-common
Now table packages don't depend on flink-connector-files anymore. Fix orc and parquet format to use only common classes and not planner nor runtime classes.
- [connector-files] Add @Internal to all public classes and interfaces
- [orc][parquet][hive] Drop scala suffix from flink-orc and flink-parquet
- [architecture-tests] Updated the violations file
- [connector-elasticsearch-base] Add flink-connector-base as dependency, which was previously brought in through flink-table-api-java-bridge -> flink-table-api-java -> flink-table-common -> flink-connector-files -> flink-connector-base.
- [orc][parquet] Add issue link for partition keys handling
- [table-uber][dist] Now flink-connector-files is not shaded inside table-uber anymore but it's loaded in /lib in the distribution as flink-connector-files
- [docs] Update sql_connectors.yml

This closes apache#17897.
What is the purpose of the change
This PR moves the FileSystemTableSource/Sink out of flink-table-runtime and into the flink-connector-files module. This PR ended up being bigger than expected, so I tried to factor out the preliminary changes to the actual file moving into separate hotfix commits. More details in the commit details section.
Brief change log
Classes changes:
- Renamed the package org.apache.flink.table.filesystem to org.apache.flink.connector.file.table
- Moved org.apache.flink.connector.file.table from flink-table-runtime to flink-connector-files
- Moved the tests of org.apache.flink.connector.file.table from flink-table-runtime to flink-connector-files, others in flink-table-planner. Now no filesystem related test lives in flink-table-runtime
- Moved the columnar RowData implementations from flink-table-runtime to flink-table-common, under the package org.apache.flink.table.data.columnar
- Moved FileSystemFormatFactory, BulkWriterFormatFactory and BulkReaderFormatFactory to flink-connector-files, in a new package org.apache.flink.connector.file.table.factories
- Moved BulkDecodingFormat to flink-connector-files, in a new package org.apache.flink.connector.file.table.formats

Module changes:
- No table module depends on flink-connector-files for production jars anymore
- flink-table-planner still depends on flink-connector-files in the test classpath
- flink-orc and flink-parquet don't depend on flink-table-runtime anymore, hence the scala suffix is dropped
- Format modules depend on flink-connector-files to implement BulkWriterFormatFactory and BulkReaderFormatFactory
Commit details
- Changed the FactoryUtil loading mechanism to tolerate NoClassDefFoundError: this is necessary in scenarios where users bring in a format module implementing BulkReaderFormatFactory/BulkWriterFormatFactory, but don't have flink-connector-files in the classpath because they don't use it. E.g. you're using flink-connector-kafka and you want to use flink-avro, which brings in an implementation of BulkWriterFormatFactory. This scenario is already tested by every test in flink-connector-kafka, which doesn't bring in flink-connector-files in its test classpath.
- FileSystemConnectorFiles: hardcoded the used option
- Duplicated DecimalDataUtils#is32BitDecimal in ParquetSchemaConverter to remove the dependency on DecimalDataUtils (which lives in the runtime module)
- Implemented SerializationSchemaFormatFactory instead, since the implementation of BulkReaderFormatFactory was causing issues when using the flink-table-planner test-jar in other modules.
- FileSystemTableSink#createSourceContext, which cannot provide the DataStructureConverter. This issue can potentially come up with other formats used by FileSystemFormatFactory as well, so we probably need to address it separately from this PR.
- flink-table-planner formats don't use InternalTypeInfo for creating the TypeInformation anymore; now they're using the context.
- org.apache.flink.table.sinks: changed to remove the dependency from flink-connector-files types
- @Internal added to the public classes of flink-connector-files
Verifying this change
Every change is already covered by existing tests.

Does this pull request potentially affect one of the following parts:
- @Public(Evolving): changes the package name of flink-orc and flink-parquet
- Documentation