Flink: Add support for Flink 1.18 #8930
Comments
I am concerned about the dependency tree of the flink/v1.17 module. If flink-common depends on Flink 1.18, and flink/v1.17 depends on flink-common, then we will have a transitive dependency on Flink 1.18. We can exclude the dependency, but how can we make sure that everything works as expected without running the tests?
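For reference, excluding such a transitive Flink 1.18 dependency could look roughly like this in Gradle (module and group names here are illustrative, not the actual Iceberg build configuration):

```groovy
// build.gradle of flink/v1.17 (sketch; project and artifact names are assumptions)
dependencies {
    implementation(project(':iceberg-flink-common')) {
        // keep flink-common's Iceberg code, but drop the Flink 1.18 jars it pulls in
        exclude group: 'org.apache.flink'
    }
    // pin the Flink version this module is actually built and tested against
    compileOnly 'org.apache.flink:flink-streaming-java:1.17.1'
}
```

The exclusion only removes the jars, though; it cannot prove that flink-common's bytecode is actually compatible with the 1.17 APIs at runtime, which is the concern raised above.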
You're right, there will be a transitive dependency. If Flink 1.17's APIs are incompatible with Flink 1.18's, I think we can copy some tests into the Flink 1.17 module to ensure it works fine, and add end-to-end tests to guarantee compatibility with the different Flink versions.
As of Flink 1.18, released a few days ago, the Flink community has externalized the JDBC, Elasticsearch, Kafka, Pulsar, HBase, and other connectors. That means the Flink API has become more stable across versions, so I think the maintenance cost between different Flink versions is acceptable.
The list of supported Flink versions for each connector:
I think the issue here is that we cannot know in advance which dependency needs to be copied over. I tried to upgrade to Flink 1.18 yesterday and found a Flink change which causes the TestFlinkMetaDataTable#testAllFilesPartitioned test to fail. See: https://apache-flink.slack.com/archives/C03G7LJTS2G/p1698402761715879?thread_ts=1698402761.715879&cid=C03G7LJTS2G One can argue that these issues are found at upgrade time, but if we add features between migrations, we will only find them by running all of the tests. So I think running all of the tests on every supported Flink version is a must.
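Running the full suite on every supported version could be expressed as a CI build matrix, for example in GitHub Actions (a sketch only; the workflow, property, and task names are assumptions, not the actual Iceberg CI setup):

```yaml
# .github/workflows/flink-ci.yml (sketch; names are assumptions)
jobs:
  flink-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        flink: ['1.16', '1.17', '1.18']   # every supported Flink version
    steps:
      - uses: actions/checkout@v4
      # run the version-specific Flink module's tests for this matrix entry
      - run: ./gradlew -DflinkVersions=${{ matrix.flink }} :iceberg-flink:check
```

This way an API incompatibility like the one above surfaces in CI on the affected version, not only at upgrade time.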
I am currently working on extending the Flink Sink API to provide features needed for Iceberg. See:
I would like to use them ASAP, and this will cause differences between the 1.18 and 1.19 versions of the Iceberg connector. If we have flink-common, then up to 1.18 this code should live in common; starting with 1.19 it should move to the version-specific code; and after we drop support for 1.18 it would move back to the common code again. We would also need to come up with appropriate abstractions to separate the changing code from the fixed code. These issues might be even more pronounced when Flink 2.0 comes out, the discussion of which has already started. IMHO these are more complex tasks than simply backporting the changes to the other branches. Maybe that is why the Iceberg-Spark code is handled in the same way.
Quick check: is there any progress or a plan/timeline for this request?
I have 2 PRs in progress which I would like to merge first (#8803 and #8553). If nobody starts working on this before they are merged, then I will move forward with this one. I plan to use the original method of upgrading (an independent path for every Flink version). I would like to see 1.18 available before the end of the year, but hopefully sooner.
The failing test TestFlinkMetaDataTable#testAllFilesPartitioned is due to a Flink-side issue which I reported here: https://issues.apache.org/jira/browse/FLINK-33523.
Flink 1.17 used to allow conversion of collection (List) data types. The failing Iceberg unit test needs to be fixed to handle this case. I have a patch which fixes it. Let me know when we start working on this; I can share the patch for the unit test failure.
Thanks @PrabhuJoseph for the investigation!
Now that #9211 is merged, we can close this.
Feature Request / Improvement
Feature Request:
Recently Flink 1.18 was released.
https://nightlies.apache.org/flink/flink-docs-release-1.18/release-notes/flink-1.18/
What to do?
The Flink community has been working hard to improve the stability of the connector interface. This makes it easier to abstract and maintain the Flink common connector between different Flink versions. Similar efforts have been made in other open-source communities, such as Paimon (Apache Incubating) and Amoro.
The expected flink modules would be:
/flink
|---/v1.16
|------/flink
|------/flink-runtime
|---/v1.17
|------/flink
|------/flink-runtime
|---/v1.18
|------/flink
|------/flink-runtime
|---/flink-common # depends on the newest Flink version, 1.18
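Such a layout would map onto Gradle subprojects along these lines (a sketch with assumed project names, not the actual Iceberg settings file):

```groovy
// settings.gradle (sketch; project names are assumptions)
include ':iceberg-flink-common'                 // shared code, built against Flink 1.18
include ':iceberg-flink:flink-1.16'
include ':iceberg-flink:flink-1.16-runtime'     // shaded runtime bundle
include ':iceberg-flink:flink-1.17'
include ':iceberg-flink:flink-1.17-runtime'
include ':iceberg-flink:flink-1.18'
include ':iceberg-flink:flink-1.18-runtime'
```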
This issue only concerns the flink-common abstraction for the Flink 1.18 version; refactoring the 1.17 and other Flink connectors would be done in other issues.
What's the benefit?
WDYT?
Query engine
Flink