[FLINK-7811] Add support for Scala 2.12 #6894
Conversation
…eCleaner This updates the ClosureCleaner with recent changes from SPARK-14540.
Some places in the code were missing override. This wasn't a problem with Scala 2.11, but Scala 2.12 seems to be stricter about it; see the sketch below.
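A minimal sketch of the kind of error this fixes (names invented). One common 2.12 trigger, hedged as an assumption here, is that Scala 2.12 treats Java 8 default methods as concrete members, so redefining one from Scala suddenly requires an explicit `override`:

```scala
// Sketch, not Flink code: redefining a member the compiler considers
// concrete requires the `override` modifier. Code that compiled on 2.11
// can hit this on 2.12; the fix is simply adding `override`.
trait Closeable {
  def close(): Unit = ()
}

class Connection extends Closeable {
  // Without `override` this fails: "method close needs `override' modifier"
  override def close(): Unit = println("closed")
}
```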
…pport Scala 2.12 doesn't have ForkJoinPool there anymore.
It seems this wasn't a problem with Scala 2.11, but with 2.12 we run into infinite recursion. The fix should still be good for 2.11.
Our previous grizzled dependency is too old for Scala 2.12.
Our previous dependency was too old.
The new Scala 2.12 typechecker didn't like this.
These changes are required for 2.12 support and also work on Scala 2.11.
It seems \$ doesn't work but $$ does. I noticed this when changing DataStream.scala.
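A hedged sketch of the escaping the commit describes (the doc text is invented; the claim that `$$` works while `\$` does not is taken directly from the commit message):

```scala
/** Sketch of the Scaladoc dollar-sign escaping described above.
  *
  * To mention a name like `DataStream$$anon$1` in a doc comment, each
  * literal dollar sign is written as `$$`; writing `\$` reportedly did
  * not survive the Scala 2.12 doc tooling.
  */
class DocEscapingExample
```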
Force-pushed from 175b4d7 to 44b9080
$MVN clean deploy -Prelease,docs-and-source -DskipTests -DretryFailedDeploymentCount=10
echo "Deploying Scala 2.12 version"
./change-scala-version.sh 2.12
The fact that the side-effects of this call aren't visible in the git history concerns me a bit.
This effectively means that the release commit corresponds to neither the deployed artifacts nor the binary releases. This brings us into really murky territory that I'd very much like to avoid, as it raises the question of how to determine (and verify!) what changes have been made to the artifacts/binaries since the release commit.
I agree! We have had the same problem when we did Scala 2.10 and 2.11 releases, I think. And to a degree we have it with the different Hadoop versions; we already do many different releases from the same commit.
I can introduce a profile for Scala 2.12 that changes the version, but that hasn't worked well in the past and there were always problems with the IDE setup; that's why I did it like this.
I'm very open to suggestions here. 😅
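For context, a hedged sketch of what a change-scala-version.sh script of this kind typically does (the actual script is not shown in this thread): it rewrites the working tree in place, which is exactly why the deployed artifacts can drift from the release commit.

```bash
#!/usr/bin/env bash
# Hedged sketch, NOT Flink's actual change-scala-version.sh. Scripts of this
# kind flip the Scala suffix in every module POM in place,
# e.g. flink-runtime_2.11 -> flink-runtime_2.12.
TO_VERSION=${1:?usage: $0 <scala binary version, e.g. 2.12>}
if [ "$TO_VERSION" = "2.12" ]; then FROM_VERSION="2.11"; else FROM_VERSION="2.12"; fi

# Uncommitted working-tree edits: precisely the side-effects the review
# comment above notes are invisible in the git history.
find . -name pom.xml -print0 \
  | xargs -0 sed -i "s/_${FROM_VERSION}/_${TO_VERSION}/g"
```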
pom.xml (outdated)
<configuration>
  <name>maven.test.skip</name>
  <value>${project.artifactId}</value>
  <regex>(flink-connector-kafka-0.9.*)|(flink-connector-kafka-0.8.*)|(flink-scala-shell.*)</regex>
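This configuration looks like the build-helper-maven-plugin's regex-property goal, which would set maven.test.skip to true whenever the current module's artifactId matches the regex. The enclosing plugin block is cut off in the diff, so the surrounding elements below are an assumption:

```xml
<!-- Hedged sketch of the assumed enclosing plugin block; the execution id
     and the replacement/failIfNoMatch values are guesses. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>skip-tests-for-unsupported-modules</id>
      <goals><goal>regex-property</goal></goals>
      <configuration>
        <name>maven.test.skip</name>
        <value>${project.artifactId}</value>
        <regex>(flink-connector-kafka-0.9.*)|(flink-connector-kafka-0.8.*)|(flink-scala-shell.*)</regex>
        <replacement>true</replacement>
        <failIfNoMatch>false</failIfNoMatch>
      </configuration>
    </execution>
  </executions>
</plugin>
```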
Why are we not hiding these modules entirely when using Scala 2.12? (Or, in Maven logic: why are we not including them only when not running with Scala 2.12?)
Kafka 0.9 and the Scala shell theoretically work with Scala 2.12. For Kafka 0.9 we can't run tests because there is no kafka_2.12 dependency for 0.9. Some Scala shell tests pass and some don't, but parts of it seem to work.
Kafka 0.8 can be removed here because we actually exclude it from the build when building for Scala 2.12.
What exactly does "theoretically" mean here? Do I understand it correctly that the connector works but the test infrastructure doesn't? Are the Scala shell tests failing because the test infrastructure is broken, or because the shell itself is in a half-broken state now? (In the latter case I'd exclude it completely.)
I'll exclude the Scala shell completely. And yes, for Kafka 0.9 the connector should work, but testing doesn't because the test dependency is not available for Scala 2.12.
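A hedged sketch of the exclusion mechanism mentioned here (profile id, activation property, and module paths are assumptions): Maven profiles can only add modules, so the modules that don't work on 2.12 would be listed only in a profile that is active when 2.12 is not requested.

```xml
<!-- Sketch: these modules join the build only when the scala-2.12
     property is absent, i.e. they are excluded from 2.12 builds. -->
<profile>
  <id>scala-2.11-modules</id>
  <activation>
    <property><name>!scala-2.12</name></property>
  </activation>
  <modules>
    <module>flink-connectors/flink-connector-kafka-0.8</module>
    <module>flink-scala-shell</module>
  </modules>
</profile>
```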
@@ -45,22 +45,19 @@ under the License.
<dependency>
  <groupId>org.apache.flink</groupId>
  <!-- use a dedicated Scala version to not depend on it -->
same situation as flink-avro
@@ -135,6 +129,7 @@ under the License.
<build>
  <plugins>
revert
...rc/test/scala/org/apache/flink/api/scala/migration/StatefulJobSavepointMigrationITCase.scala
...t/scala/org/apache/flink/api/scala/migration/StatefulJobWBroadcastStateMigrationITCase.scala
Force-pushed from 44b9080 to 616cee9
Previously we only allowed a lambda here, which was an omission. This also adds support for KeySelector on other operations that need a key.
This is needed for Scala 2.12 compatibility.
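A hedged sketch of the kind of call this enables (the API shape is from memory and may differ; the data and job are invented):

```scala
import org.apache.flink.api.java.functions.KeySelector
import org.apache.flink.streaming.api.scala._

object KeySelectorSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val words = env.fromElements(("a", 1), ("b", 2), ("a", 3))

    // Passing an explicit KeySelector instance instead of a Scala lambda,
    // which per the commit message was previously not accepted here.
    val keyed = words.keyBy(new KeySelector[(String, Int), String] {
      override def getKey(value: (String, Int)): String = value._1
    })

    keyed.sum(1).print()
    env.execute("key-selector-sketch")
  }
}
```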
… 0.7.6 We have to do this in order to update our Chill dependency without changing the Kryo serializers that are registered by default. The problem is that snapshots of our Kryo serializer only contain the user-registered serializers, not the serializers that Chill registers by default. If we had those, we could probably get by without this change. The reason we have to update is that there is no Chill 0.7.4 dependency for Scala 2.12.
This also removes old 2.10 savepoints.
…n Scala 2.12 They didn't have a SerialVersionUID before, and it seems Scala 2.12 assigns different UIDs than Scala 2.11, so we fix them to the values that Scala 2.11 assigned automatically.
It seems that the SerialVersionUID of this changed between 2.11 and 2.12. We need to ignore it so that we can restore savepoints taken on a 2.11 build with a 2.12 build.
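A hedged sketch of the pinning described in the two commits above (class name and UID value are invented; in the real fix the UID would be whatever a 2.11 build actually generated):

```scala
// Pinning the UID keeps Java serialization compatible across compiler
// versions: without the annotation, 2.11 and 2.12 builds compute different
// serialVersionUIDs for the same class, and restoring a 2.11 savepoint
// from a 2.12 build fails with an InvalidClassException.
@SerialVersionUID(6273385350731299062L) // invented value
class ExampleStateHolder(val count: Long) extends Serializable
```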
…ound() With Scala 2.12, the ClosureCleaner will complain that the filter function that uses withinBound() is not serializable. The reason is that when "Bound" is not final, it is serialized with the closure, which then includes IterateExample.
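A hedged, simplified sketch of that capture problem (this is not Flink's actual example code):

```scala
object IterateExample {
  // When Bound is a plain (non-final) member, reading it inside the lambda
  // compiles to a field access on the enclosing instance, so the closure
  // drags IterateExample along and the ClosureCleaner rejects it as
  // non-serializable. Marking it final lets the constant be inlined.
  final val Bound = 100

  private def withinBound(value: Int): Boolean = value < Bound

  def main(args: Array[String]): Unit = {
    val kept = (1 to 200).filter(withinBound)
    println(s"${kept.size} values within bound")
  }
}
```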
Force-pushed from 616cee9 to f1694ce
@zentol I think I addressed all your comments except the json, avro, and sql-client comments.
This changes some modules that had a _2.11 dependency but didn't expose it in their module name to instead depend on the ${scala.binary.version} dependency. The main reason for this is to make the build self-contained: before, with the hard dependency on 2.11, when building for 2.12 it would not be clear where the dependency would come from, because it is not created as part of the build. This could lead to inconsistencies. For example, when adding a new class in flink-runtime but recompiling only on 2.12 and not on 2.11, the 2.12 tests would fail when using that new class because they would use 2.11 dependencies that weren't rebuilt with the new class. We also don't build flink-scala-shell and flink-connector-kafka-0.8 because they don't work with Scala 2.12. This also includes $PROFILE in the dependency convergence check script, in preparation for building with "-Dscala-212", where we have to exclude certain things. We also exclude Kafka 0.8 from the stages when building for Scala 2.12, and add Scala 2.12 to the release scripts.
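A hedged sketch of the dependency change this commit describes (the artifact shown is just an example; sourcing the version from ${project.version} is an assumption):

```xml
<!-- Before: hard-coded suffix.
<artifactId>flink-runtime_2.11</artifactId>
-->
<dependency>
  <groupId>org.apache.flink</groupId>
  <!-- After: the suffix follows whichever Scala profile is active. -->
  <artifactId>flink-runtime_${scala.binary.version}</artifactId>
  <version>${project.version}</version>
</dependency>
```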
Force-pushed from f1694ce to d25778a
echo "Deploying Scala 2.11 version" | ||
$MVN clean deploy -Prelease,docs-and-source -DskipTests -DretryFailedDeploymentCount=10 | ||
$MVN clean deploy $COMMON_OPTIONS -Pscala-2.11 |
This doesn't work, as you cannot specify the -P flag multiple times. One of the -P arguments will be completely ignored (can't remember which).
This is a perfect example of why property-based activations are so useful.
I think using multiple -P flags works; I checked it with mvn help:active-profiles -Pscala-2.12 -Pscala-11 (which is a stupid example, but works).
Nevertheless, I'll change to profile activation so that we can have a default profile. However, this will only work well as long as we have exactly two Scala versions, because I can use !scala-2.12 and scala-2.12.
@@ -613,6 +625,106 @@ under the License.
<profiles>
<profile>
  <id>scala-2.11</id>
Missing activation properties; in the current form one always has to explicitly activate one Scala profile or the build will simply fail.
See comment above.
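A hedged sketch of the property-based activation discussed above (the property name follows the !scala-2.12 / scala-2.12 scheme mentioned in the reply; the Scala patch versions are assumptions):

```xml
<!-- scala-2.11 is the default: it activates whenever the scala-2.12
     property is absent, so a bare `mvn` build no longer fails. -->
<profile>
  <id>scala-2.11</id>
  <activation>
    <property><name>!scala-2.12</name></property>
  </activation>
  <properties>
    <scala.version>2.11.12</scala.version> <!-- assumed patch version -->
    <scala.binary.version>2.11</scala.binary.version>
  </properties>
</profile>

<!-- Activated with -Dscala-2.12, which simultaneously deactivates the
     2.11 profile above. -->
<profile>
  <id>scala-2.12</id>
  <activation>
    <property><name>scala-2.12</name></property>
  </activation>
  <properties>
    <scala.version>2.12.7</scala.version> <!-- assumed patch version -->
    <scala.binary.version>2.12</scala.binary.version>
  </properties>
</profile>
```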
This is a version of #6784 that does all the work required for 2.12 but doesn't change the build infrastructure (Travis) to do a Scala 2.12 build.