
[FLINK-24765][kafka] Bump Kafka version to 2.8 #17696

Merged: 3 commits, Nov 11, 2021

Conversation


@fapaul fapaul commented Nov 5, 2021

What is the purpose of the change

With Scala 2.11 dropped, we can now finally upgrade our Kafka dependencies. This is important for the stability of our tests, because we were missing a few years of Kafka fixes.

Brief change log

  • Bump Kafka version to 2.8

Verifying this change

All tests are still passing

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

flinkbot (Collaborator) commented Nov 5, 2021

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit ebf1f24 (Fri Nov 05 12:58:51 UTC 2021)

Warnings:

  • 2 pom.xml files were touched: Check for build and licensing issues.
  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

flinkbot (Collaborator) commented Nov 5, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run azure re-run the last Azure build

@MartijnVisser (Contributor) left a comment

Did a quick review (I'm no Java expert, so someone else should probably have a more thorough look). Just a couple of questions; overall it looks good to me.

Comment on lines +114 to +122
<!-- Required to execute the Kafka server for testing. Please change the ZooKeeper version
     accordingly when changing the Kafka version:
     https://github.com/apache/kafka/blob/839b886f9b732b151e1faeace7303c80641c08c4/gradle/dependencies.gradle#L122 -->
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.5.9</version>
    <scope>test</scope>
</dependency>

Contributor:

I believe @dmvk was looking into dropping Zookeeper 3.4 for 1.15, so would that also solve this issue then?

Contributor:

If Kafka needs a specific Zookeeper version then it is better to specify that in the pom; otherwise this can easily break if we change the Zookeeper version.

Whether we drop ZK 3.4 or not is largely independent from this in any case, because that primarily applies to flink-shaded-zookeeper.

It would actually be interesting to know which modules actually rely on vanilla zookeeper; maybe the root pom entry isn't that useful anymore.

Author:

Honestly, I am not sure whether other versions would suffice here. I just know that the kafka-server no longer has this dependency, so we have to include it ourselves. Other ZooKeeper versions would probably also work, but this is the one currently included by Kafka, so I left the link to check the corresponding version.

Member:

@MartijnVisser for more context on the zookeeper bump - https://issues.apache.org/jira/browse/FLINK-24707

@@ -33,7 +33,7 @@
     public static final String ELASTICSEARCH_6 =
             "docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.1";

-    public static final String KAFKA = "confluentinc/cp-kafka:5.5.2";
+    public static final String KAFKA = "confluentinc/cp-kafka:6.2.1";
Member:

I've already run into this in the past, would it be possible to unify these (a single constant shared between these tests)?
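The unification suggested above (one shared constant for the Kafka test image) could look roughly like the sketch below. The class and field names here are illustrative assumptions, not Flink's actual layout:

```java
// Hypothetical shared holder for docker image tags used across tests,
// so a version bump only has to happen in one place.
public final class DockerImageVersions {

    private DockerImageVersions() {
        // constants only, no instances
    }

    /** Single source of truth for the Kafka test image. */
    public static final String KAFKA = "confluentinc/cp-kafka:6.2.1";

    public static final String ELASTICSEARCH_6 =
            "docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.1";

    public static void main(String[] args) {
        // Tests would reference DockerImageVersions.KAFKA instead of a literal.
        System.out.println(KAFKA);
    }
}
```

Any test module that starts a Kafka container would then reference `DockerImageVersions.KAFKA` instead of duplicating the literal tag.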

Comment on lines -157 to -170
@Override
public void close(long timeout, TimeUnit unit) {
    synchronized (producerClosingLock) {
        kafkaProducer.close(timeout, unit);
        if (LOG.isDebugEnabled()) {
            LOG.debug(
                    "Closed internal KafkaProducer {}. Stacktrace: {}",
                    System.identityHashCode(this),
                    Joiner.on("\n").join(Thread.currentThread().getStackTrace()));
        }
        closed = true;
    }
}

Contributor:

Don't we need this anymore or is this no longer possible?

Author:

The method is no longer part of the interface, so we cannot implement it anymore.
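For context: in newer kafka-clients releases the deprecated `Producer#close(long, TimeUnit)` overload was removed in favor of `close(Duration)`, which is why the `@Override` above no longer compiles. The sketch below shows the shape of that change using simplified stand-in interfaces (these are not the real kafka-clients types):

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class CloseOverloadMigration {

    // Simplified stand-in for the old producer contract.
    interface OldProducer {
        void close(long timeout, TimeUnit unit); // removed in newer kafka-clients
    }

    // Simplified stand-in for the current contract: only close(Duration) survives,
    // so a wrapper class can no longer @Override the (long, TimeUnit) variant.
    interface NewProducer {
        void close(Duration timeout);
    }

    static String closeWith(NewProducer producer, Duration timeout) {
        producer.close(timeout);
        return "closed within " + timeout;
    }

    public static void main(String[] args) {
        // A wrapper now delegates through the Duration-based overload only.
        NewProducer producer = timeout -> { /* delegate to the wrapped producer */ };
        System.out.println(closeWith(producer, Duration.ofSeconds(5)));
    }
}
```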

Contributor:

Was it dead code before? (looks to me like it)

Author:

Why should it be dead code?

Contributor:

Ah sorry, I looked at the wrong commit.

@ijuma (Contributor) commented Jan 22, 2022:

Author:

Thanks for the hint. During the upgrade I also tested with 3.0.0; probably I missed that when going back to 2.8.

Fabian Paul added 3 commits November 8, 2021 14:16
By bumping the Kafka version to the latest 2.x release we include the latest Kafka patches, which should harden our tests.
@MartijnVisser (Contributor) left a comment

Looks good to me! Thank you.

Comment on lines +59 to +60
* <p>Do not run this class in the same JUnit execution as other tests in your IDE. This may lead
* to leaking threads.
Contributor:

Is this an issue because tests may be run in parallel in the same JVM of the IDE and thus the leak detection looks at the threads of another test?

Contributor:

If we see more tests than that, we could also introduce an annotation later.

Author:

Is this an issue because tests may be run in parallel in the same JVM of the IDE and thus the leak detection looks at the threads of another test?

Are tests within the same module run in parallel by default? Perhaps there is also another test within the module leaking threads that is executed before the FlinkKafkaProducerITCase ...
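The suspicion above can be illustrated in miniature: a thread leaked by one test is still visible to any leak check performed by a later test in the same JVM. This is a hypothetical check for demonstration, not Flink's actual leak detector:

```java
import java.util.HashSet;
import java.util.Set;

public class ThreadLeakDemo {

    /** Returns how many threads appeared since the given baseline snapshot. */
    static int leakedSince(Set<Thread> baseline) {
        Set<Thread> now = new HashSet<>(Thread.getAllStackTraces().keySet());
        now.removeAll(baseline);
        return now.size();
    }

    public static void main(String[] args) {
        Set<Thread> baseline = new HashSet<>(Thread.getAllStackTraces().keySet());

        // "Test A" forgets to shut down a worker thread.
        Thread leaked = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException ignored) {
                // expected on cleanup
            }
        }, "leaked-from-test-A");
        leaked.setDaemon(true); // daemon only so this demo JVM can still exit
        leaked.start();

        // A later "test B" running in the same JVM now sees the stray thread.
        System.out.println("threads leaked into the next test: " + leakedSince(baseline));
        leaked.interrupt();
    }
}
```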

Author:

If we see more tests than that, we could also introduce an annotation later.

Good idea; hopefully this case does not become more common.

@AHeise AHeise merged commit e61ff88 into apache:master Nov 11, 2021
8 participants