
KAFKA-13399 towards scala3 #11432

Closed
wants to merge 5 commits

Conversation

@jlprat jlprat commented Oct 25, 2021

This PR takes all the changes from #11350 that are needed for Scala 3 compatibility, except those tied to an open bug report or to one that has already been fixed.

The PR is split into 5 commits, one for each type of change performed:

  • Avoid shadowing: the Scala 3 compiler is less forgiving about shadowing and reports errors where Scala 2 didn't.
  • Don't rely on automatic widening of numeric types: Scala 3 doesn't automatically convert Short to Int or Int to Long as freely as Scala 2 did. These changes help the typer by making the conversions explicit.
  • Remove redundant parentheses: Scala 2 was more lenient about calling a method declared without parentheses using an empty parameter list; Scala 3 is stricter about this.
  • Explicit type declarations: Scala 3's type inference changed slightly and in some cases picks the most general type, which causes trouble here because that type is not public but package protected.
    In other cases the typer wasn't able to infer the proper type, so it has to be set manually.
  • Workaround for SAM conversion with overloading: this is a reported bug that unfortunately can't be fixed easily on Scala's side without breakage; a hedged sketch of the workaround follows this list. For further information see "Code not compiling when Unit-returning SAM and overloaded methods involved" scala/scala3#13549
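As a rough illustration of the last point, here is a hedged, self-contained sketch of the workaround pattern. The Callback trait and Scheduler object are invented for the example and are not taken from the Kafka code base; the point is only that when an overload set contains a Unit-returning SAM parameter, the SAM can be instantiated explicitly instead of relying on the lambda conversion.

// Hypothetical example: `Callback` and `Scheduler` are illustrative only.
trait Callback { def onComplete(): Unit }         // Unit-returning SAM

object Scheduler {
  def schedule(cb: Callback): Unit = cb.onComplete()
  def schedule(delayMs: Long): Unit = ()          // overload taking part in resolution
}

object SamWorkaroundDemo extends App {
  // Under the Scala 3 versions affected by scala/scala3#13549, a plain lambda
  // such as `Scheduler.schedule(() => println("done"))` may fail to compile,
  // so the SAM conversion is written out explicitly:
  Scheduler.schedule(new Callback {
    def onComplete(): Unit = println("done")
  })
}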

The resulting code is still valid Scala 2 and arguably more correct.

Instead of making every change in one enormous PR, we can already apply the changes we know for a fact will be needed for Scala 3.

If desired, I can provide one PR per commit.

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@jlprat jlprat left a comment

I added some comments to the changes to help with the review process.

@@ -22,7 +22,7 @@ import java.util.{Collections, Properties}
import joptsimple._
import kafka.common.AdminCommandFailedException
import kafka.log.LogConfig
-import kafka.utils._
+import kafka.utils.{immutable => _, _}

Changes like this one solve a collision between the immutable package in Scala collections and kafka.utils.immutable.
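As a hedged, standalone sketch of the pattern (UtilsLike here is an invented stand-in for kafka.utils, not the real package), the `name => _` form excludes the clashing member from the wildcard import:

// `immutable => _` hides that member from the wildcard import, so below the
// name `immutable` unambiguously refers to scala.collection.immutable.
object UtilsLike {                              // hypothetical stand-in for kafka.utils
  class immutable extends annotation.StaticAnnotation
  val someHelper: Int = 42
}

object ImportDemo {
  import UtilsLike.{immutable => _, _}          // everything from UtilsLike except `immutable`
  import scala.collection.immutable

  val m: immutable.Map[String, Int] = immutable.Map("answer" -> someHelper)
}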

@@ -42,11 +42,11 @@ final class KafkaMetadataLog private (
// Access to this object needs to be synchronized because it is used by the snapshotting thread to notify the
// polling thread when snapshots are created. This object is also used to store any opened snapshot reader.
snapshots: mutable.TreeMap[OffsetAndEpoch, Option[FileRawSnapshotReader]],
-topicPartition: TopicPartition,
+topicPartitionArg: TopicPartition,

Avoids a collision between the argument name and the method with the same name.
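A minimal, hedged sketch of the same idea with simplified types (this is not the actual KafkaMetadataLog signature): the constructor parameter gets an Arg suffix so it no longer shadows the accessor method of the same name.

// Hypothetical, simplified version of the rename: the parameter is
// `topicPartitionArg` and the method `topicPartition` exposes it, so the two
// names no longer shadow each other.
final class MetadataLogLike(topicPartitionArg: String) {
  def topicPartition(): String = topicPartitionArg
}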

@@ -27,7 +27,7 @@ import com.typesafe.scalalogging.LazyLogging
import joptsimple._
import kafka.utils.Implicits._
import kafka.utils.{Exit, _}
-import org.apache.kafka.clients.consumer.{Consumer, ConsumerConfig, ConsumerRecord, KafkaConsumer}
+import org.apache.kafka.clients.consumer.{Consumer, ConsumerConfig => ClientConsumerConfig, ConsumerRecord, KafkaConsumer}

This avoids a name clash between the ConsumerConfig class under o.a.k.c.c. and the ConsumerConfig defined locally in this file.
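A hedged sketch of the alias-on-import pattern; the surrounding ConsoleToolLike object and its nested config are invented, and only the clients class with its GROUP_ID_CONFIG constant is the real API. The clients class is imported under a new name so the tool's own ConsumerConfig keeps the original one.

import org.apache.kafka.clients.consumer.{ConsumerConfig => ClientConsumerConfig}

object ConsoleToolLike {                        // hypothetical stand-in for the tool object
  private object ConsumerConfig {               // tool-local config with the same simple name
    val groupId = "console-tool"
  }

  def consumerProps(): java.util.Properties = {
    val props = new java.util.Properties()
    // the alias refers unambiguously to the Kafka clients class
    props.put(ClientConsumerConfig.GROUP_ID_CONFIG, ConsumerConfig.groupId)
    props
  }
}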

@@ -27,7 +27,7 @@ import kafka.message._
import kafka.utils.Implicits._
import kafka.utils.{CommandDefaultOptions, CommandLineUtils, Exit, ToolsUtils}
import org.apache.kafka.clients.producer.internals.ErrorLoggingCallback
-import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
+import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig => ClientProducerConfig, ProducerRecord}

This avoids a name clash between the ProducerConfig class under o.a.k.c.p. and the ProducerConfig defined locally in this file.

Comment on lines +241 to +248
opts.options.valueOf(opts.interBrokerThrottleOpt).longValue(),
opts.options.valueOf(opts.replicaAlterLogDirsThrottleOpt).longValue(),
opts.options.valueOf(opts.timeoutOpt).longValue())
} else if (opts.options.has(opts.cancelOpt)) {
cancelAssignment(adminClient,
Utils.readFileAsString(opts.options.valueOf(opts.reassignmentJsonFileOpt)),
opts.options.has(opts.preserveThrottlesOpt),
-opts.options.valueOf(opts.timeoutOpt))
+opts.options.valueOf(opts.timeoutOpt).longValue())

These values were all Ints that Scala 2 converted to Long automatically; the conversion is now explicit.
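A hedged sketch of the conversion pattern in isolation (the option parsing is omitted and the boxed value is a placeholder for what joptsimple's valueOf returns): the step down to a Scala Long is spelled out with longValue() instead of being left to the typer.

object LongValueDemo extends App {
  // Hypothetical stand-in for `opts.options.valueOf(opts.timeoutOpt)`, which
  // yields a boxed java.lang number rather than a primitive Scala Long.
  val boxedTimeout: java.lang.Long = java.lang.Long.valueOf(30000L)

  def cancelAssignment(timeoutMs: Long): Unit =
    println(s"cancelling with timeout $timeoutMs ms")

  // The conversion is made explicit instead of relying on the compiler:
  cancelAssignment(boxedTimeout.longValue())
}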

@@ -353,7 +353,7 @@ class LogManager(logDirs: Seq[File],
s"$logDirAbsolutePath, resetting to the base offset of the first segment", e)
}

-val logsToLoad = Option(dir.listFiles).getOrElse(Array.empty).filter(logDir =>
+val logsToLoad = Option(dir.listFiles).getOrElse(Array.empty[File]).filter(logDir =>

Scala 3 wasn't able to infer that the type parameter of Array.empty should be File.
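Isolated as a hedged sketch (the helper name is invented), the fix is simply to give Array.empty its element type so it matches the expected Array[File]:

import java.io.File

object ListFilesSketch {
  // Without the explicit element type, Scala 3 could not infer that the
  // elements of `Array.empty` should be File; `Array.empty[File]` removes
  // the ambiguity.
  def listFilesOrEmpty(dir: File): Array[File] =
    Option(dir.listFiles).getOrElse(Array.empty[File])
}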

@@ -85,13 +85,13 @@ object DecodeJson {
else decodeJson.decodeEither(node).map(Some(_))
}

-implicit def decodeSeq[E, S[+T] <: Seq[E]](implicit decodeJson: DecodeJson[E], factory: Factory[E, S[E]]): DecodeJson[S[E]] = (node: JsonNode) => {
+implicit def decodeSeq[E, S[E] <: Seq[E]](implicit decodeJson: DecodeJson[E], factory: Factory[E, S[E]]): DecodeJson[S[E]] = (node: JsonNode) => {

This is the only way I could get it to work. It seems this definition should always have been like this, but maybe I'm missing a use case.
The previous definition was never satisfied in Scala 3.

if (node.isArray)
decodeIterator(node.elements.asScala)(decodeJson.decodeEither)
else Left(s"Expected JSON array, received $node")
}

-implicit def decodeMap[V, M[K, +V] <: Map[K, V]](implicit decodeJson: DecodeJson[V], factory: Factory[(String, V), M[String, V]]): DecodeJson[M[String, V]] = (node: JsonNode) => {
+implicit def decodeMap[V, M[K, V] <: Map[K, V]](implicit decodeJson: DecodeJson[V], factory: Factory[(String, V), M[String, V]]): DecodeJson[M[String, V]] = (node: JsonNode) => {

Fixed this just in case, but there is no test code using this.
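To show why the bound matters, here is a hedged, standalone sketch; the fillTo method is invented, and only the scala.collection.Factory machinery is the real standard-library API. With S[E] <: Seq[E] the constraint reads "for every element type, S of that type is a Seq of that type", which concrete collections such as List or Vector satisfy, so the implicit Factory can be resolved; per the comment above, the old S[+T] <: Seq[E] form was never satisfied under Scala 3.

import scala.collection.Factory

object DecodeBoundSketch extends App {
  // Hypothetical helper mirroring the shape of decodeSeq's type parameters:
  // S must be a Seq-like type constructor and the Factory builds the concrete S[E].
  def fillTo[E, S[E] <: Seq[E]](elem: E, n: Int)(implicit factory: Factory[E, S[E]]): S[E] =
    factory.fromSpecific(Iterator.fill(n)(elem))

  val asList: List[Int] = fillTo[Int, List](7, 3)           // List(7, 7, 7)
  val asVector: Vector[String] = fillTo[String, Vector]("a", 2)
  println((asList, asVector))
}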

@@ -239,10 +239,11 @@ class FinalizedFeatureChangeListenerTest extends ZooKeeperTestHarness {

val exitLatch = new CountDownLatch(1)
Exit.setExitProcedure((_, _) => exitLatch.countDown())
+val feature1: SupportedVersionRange = brokerFeatures.supportedFeatures.get("feature_1")

This intermediate variable is needed because the Scala 3 typer inferred the type as BaseVersionRange, which is package protected within org.apache.kafka.common.feature.

@@ -59,7 +59,7 @@ class IsrExpirationTest {
@BeforeEach
def setUp(): Unit = {
val logManager: LogManager = EasyMock.createMock(classOf[LogManager])
-EasyMock.expect(logManager.liveLogDirs).andReturn(Array.empty[File]).anyTimes()
+EasyMock.expect(logManager.liveLogDirs).andReturn(Seq.empty[File]).anyTimes()

The method is expected to return a Seq instead of an Array. The previous code didn't work in Scala 3.


jlprat commented Oct 25, 2021

cc. @ijuma if you have some time to review this I would be thankful!

These are the changes I mentioned on the mailing list regarding the Scala 3 migration, limited to the ones that are definitely needed. Other changes that were present in #11350 but not here might not be needed once we migrate to Scala 3, as they are either already fixed and will land in future versions of Scala, or are being worked on.


jlprat commented Oct 25, 2021

Test failure was https://issues.apache.org/jira/browse/KAFKA-4184


jlprat commented Nov 19, 2021

Error was a flaky test: https://issues.apache.org/jira/browse/KAFKA-8785


jlprat commented Nov 23, 2021

Hi @cmccabe and @ijuma, I would highly appreciate your feedback on this PR.
Thanks in advance!


SethTisue commented Feb 18, 2022

As an aside: once this lands it would be great to get Kafka in the Scala 3 community build. (Among other reasons, because there is a lot of Java code in Kafka, and the Java-consumes-Scala scenario tends to be under-tested in both the 2 and 3 community builds.)

I think VirtusLab hopes to support other build tools besides sbt there eventually, but if that hasn't happened, https://github.com/ennru/kafka/tree/build-with-sbt could probably be brought up to date — that's what we use to include Kafka in the Scala 2 community build.


jlprat commented Feb 18, 2022

Hi @SethTisue, thanks! I will definitely take a look at it!

@jlprat jlprat mentioned this pull request Apr 5, 2022

jlprat commented Apr 5, 2022

I'll resolve the conflicts tomorrow and ping people for reviews

@jlprat jlprat force-pushed the KAFKA-13399-towards-scala3 branch from 0f16f24 to ddd25c6 on April 6, 2022 08:45

jlprat commented Apr 6, 2022

At least one of the failures was: https://issues.apache.org/jira/browse/KAFKA-12319


jlprat commented Apr 6, 2022

@ijuma @cadonna do you think it makes sense to review this PR?

@andreisilviudragnea

@mumrah Could a maintainer help us with a review?


jlprat commented Jun 1, 2023

Closing this. This PR is obsolete now, as the core module is being ported to Java and split into its own modules.

@jlprat jlprat closed this Jun 1, 2023
@antosha417

Hey @jlprat! How are you doing?

Do you think it is possible to build and release just kafka-streams-scala for Scala 3? I want to switch a couple of my projects to Scala 3, and I'm held back only by the kafka-streams-scala library.

As I understand it, it is a wrapper over the Java API, so the core module migration might not be needed to move this one to Scala 3.


jlprat commented Jul 5, 2023

Hi @antosha417, thanks for the interest.
The earliest Kafka could support Scala 3 (and I'm not saying it will) would be Kafka 4.0.0.
