
KAFKA-10787: Update spotless version and remove support JDK8 #16176

Merged: 18 commits into apache:trunk on Jun 5, 2024

Conversation

gongxuanzhang (Contributor)

This PR is linked to #16097.
To stay compatible with JDK 8 we currently use Spotless 6.13.0, but that version does not support JDK 21. Versions after 6.23.3 support JDK 21 but no longer support JDK 8.

Spotless depends on google-java-format, which is hard-coded in the project, so we have to give up JDK 8 support for Spotless.

refer:

So I updated the version and added a comment.

chia7712 (Contributor) left a comment:

@gongxuanzhang thanks for this patch.

build.gradle
@@ -799,7 +800,7 @@ subprojects {
skipConfigurations = [ "zinc" ]
}

if (project.name in spotlessApplyModules) {
if (JavaVersion.current().isJava11Compatible() && (project.name in spotlessApplyModules)) {
apply plugin: 'com.diffplug.spotless'
Contributor:

Another idea: could we reverse the spotlessApplyModules to be an "exclude list"? For example:

def excludedSpotlessModules = ['core', 'client']

and then we remove modules one by one as we apply the spotless rule to them. It is clearer than before, since we can "see" which modules are not applied. WDYT?
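A minimal sketch of what that exclude-list gate could look like in build.gradle, assuming the illustrative module paths and Spotless rules below (the actual list and rules are whatever the PR ends up defining):

```groovy
// Hypothetical exclude list: modules that should NOT have spotless applied.
def excludedSpotlessModules = [':core', ':clients']

subprojects {
  // Apply spotless only on JDK 11+ and only to modules outside the exclude list.
  if (JavaVersion.current().isJava11Compatible() && !(project.path in excludedSpotlessModules)) {
    apply plugin: 'com.diffplug.spotless'
    spotless {
      java {
        importOrder()          // the import-order check that spotlessApply fixes
        removeUnusedImports()
      }
    }
  }
}
```

With this shape, being checked is the default; a module only shows up in the list when it is explicitly excluded, which is the visibility the comment above asks for.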

gongxuanzhang (Contributor, Author):

Yes, good idea.

gongxuanzhang (Contributor, Author)

@chia7712 plz take a look

build.gradle Outdated
@@ -47,9 +47,11 @@ plugins {
// Updating the shadow plugin version to 8.1.1 causes issue with signing and publishing the shadowed
// artifacts - see https://github.com/johnrengelman/shadow/issues/901
id 'com.github.johnrengelman.shadow' version '8.1.0' apply false
// the minimum required JRE of 6.14.0+ is 11
// the spotless before 6.14.0 support JDK8, after 6.23.3 support JDK21
Contributor:

Spotless 6.13.0 has an issue with Java 21 (see https://github.com/diffplug/spotless/pull/1920), and Spotless 6.14.0+ requires JRE 11.

We are going to drop JDK8 support. Hence, Spotless is upgraded to the newest version and applied only if the build env is compatible with JDK 11.

WDYT?

gongxuanzhang (Contributor, Author)

It looks good this time.

README.md Outdated
@@ -311,3 +311,5 @@ Apache Kafka is interested in building the community; we would welcome any thoug

To contribute follow the instructions here:
* https://kafka.apache.org/contributing.html

Make sure you pass it before you submit. checkstyle and spotless (require JDK 11+)
Contributor:

Maybe we don't need this one. There are Checkstyle and Spotless sections.

README.md Outdated
@@ -233,7 +233,7 @@ The checkstyle warnings will be found in `reports/checkstyle/reports/main.html`
subproject build directories. They are also printed to the console. The build will fail if Checkstyle fails.

#### Spotless ####
The import order is a part of static check. please call `spotlessApply` to optimize the imports of Java codes before filing pull request :
The import order is a part of static check. please call `spotlessApply` (require JDK 11+) to optimize the imports of Java codes before filing pull request, if Java env incompatible JDK11 the task will be skipped.
Contributor:

the addendum is a bit verbose since you have mentioned "require JDK 11+"

// Spotless 6.13.0 has an issue with Java 21 (see https://github.com/diffplug/spotless/pull/1920), and Spotless 6.14.0+ requires JRE 11
// We are going to drop JDK8 support. Hence, Spotless is upgraded to the newest version and applied only if the build env is compatible with JDK 11.
// spotless 6.25.0 has a runtime issue with JDK8 even though we define it with `apply:false`. See https://github.com/diffplug/spotless/issues/2156 for more details
id 'com.diffplug.spotless' version "6.14.0" apply false
Contributor:

Could you give 6.24.0 a try?

Contributor:

if 6.14.0 is the latest version which can work with JDK8 (as you described in the comment), please change the "spotless 6.25.0" to "spotless 6.15+"

README.md Outdated
@@ -224,7 +224,7 @@ If needed, you can specify the Scala version with `-PscalaVersion=2.13`.
There are two code quality analysis tools that we regularly run, spotbugs and checkstyle.

#### Checkstyle ####
Checkstyle enforces a consistent coding style in Kafka.
Checkstyle enforces a consistent coding style in Kafka. Make sure you pass it before you submit.
Contributor:

We should remind developers that spotlessCheck will fail if the env is JDK8. "Make sure you pass it before you submit." is a bit verbose since we have QA :)

@@ -200,7 +201,60 @@ def determineCommitId() {
}
}

def spotlessApplyModules = ['']
def excludedSpotlessModules = [':clients',
Contributor:

noticed that #16198 will add a new module. Maybe you can include that in this PR even though #16198 is not merged yet?

// Spotless 6.13.0 has an issue with Java 21 (see https://github.com/diffplug/spotless/pull/1920), and Spotless 6.14.0+ requires JRE 11
// We are going to drop JDK8 support. Hence, Spotless is upgraded to the newest version and applied only if the build env is compatible with JDK 11.
// spotless 6.15.0+ has a runtime issue with JDK8 even though we define it with `apply:false`. See https://github.com/diffplug/spotless/issues/2156 for more details
id 'com.diffplug.spotless' version "6.14.0" apply false
Member:

Suggested change
id 'com.diffplug.spotless' version "6.14.0" apply false
id 'com.diffplug.spotless' version "6.25.0" apply false

Member:

You can always use the latest Spotless and bump your JRE to match it.

Contributor:

@Goooler thanks for your comment.

Kafka still supports JDK 8, so we make CI run checks/tests with JDK8. However, as @gongxuanzhang described, spotless 6.15+ can't work with JDK8 even though we set "apply false".

Member:

If you want to keep binary compatibility with Java 8, just declare sourceCompatibility and so on.

See also https://jakewharton.com/gradle-toolchains-are-rarely-a-good-idea/
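A minimal sketch of that suggestion in the Groovy DSL, purely illustrative (Kafka's real settings live in build.gradle, as linked in the next comment):

```groovy
// Illustrative only: compile with a newer JDK (e.g. 11 or 21) while keeping Java 8 bytecode/API compatibility.
allprojects {
  plugins.withId('java') {
    java {
      sourceCompatibility = JavaVersion.VERSION_1_8
      targetCompatibility = JavaVersion.VERSION_1_8
    }
  }
}
```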

chia7712 (Contributor), Jun 5, 2024:

Yep, you are right. We have set sourceCompatibility=8:

https://github.com/apache/kafka/blob/trunk/build.gradle#L57
https://github.com/apache/kafka/blob/trunk/build.gradle#L318

I know that we can use JDK11 to build Kafka and still stay compatible with JRE8. However, our rule is "we ought to run all checks/tests with all supported JDKs". That is why we need to make sure spotless can work with all supported JDKs.

Also, we will drop JDK8 in the next release, so we can remove this workaround and upgrade spotless to the newest version later.

Member:

Good to see.

Contributor:

@Goooler Do you approve this PR? It would be great to get an approval from a spotless core developer :)

Goooler (Member), Jun 5, 2024:

Sorry for the delay.

I think it would be great to throw exceptions if the current JRE is not Java 11 compatible. See #16176 (comment).

Either way, we can do that after dropping JDK8 as you described.

Contributor:

I think it would be great to throw exceptions if the current JRE is not Java 11 compatible.

I had a similar idea before (see #13311 (comment)). However, that means our developers can't use JDK8 to write code for Kafka. Also, we can't follow our JDK rule - run all checks/tests with all supported JDKs.


if (project.path in spotlessApplyModules) {
if(JavaVersion.current().isJava11Compatible() && project.path !in excludedSpotlessModules) {
Member:

Maybe we can just throw exceptions if the current JRE is not Java 11 compatible.
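A sketch of that alternative, assuming the same excludedSpotlessModules list as above; it fails the build instead of silently skipping the check:

```groovy
if (!(project.path in excludedSpotlessModules)) {
  if (!JavaVersion.current().isJava11Compatible()) {
    // Fail fast so a JDK 8 build cannot silently skip the spotless checks.
    throw new GradleException("Spotless requires JDK 11+, but the build is running on " + JavaVersion.current())
  }
  apply plugin: 'com.diffplug.spotless'
}
```

As discussed in the comments above, this would prevent JDK 8 builds from running at all while JDK 8 is still supported, so the skip-on-JDK8 behaviour was kept for now.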

gongxuanzhang (Contributor, Author)

Failed tests

JDK 21 and Scala 2.13

| test | jira |
| --- | --- |
| testBrokerHeartbeatDuringMigration | https://issues.apache.org/jira/browse/KAFKA-15963 |
| testTaskRequestWithOldStartMsGetsUpdated | https://issues.apache.org/jira/browse/KAFKA-16136 |

JDK 8 and Scala 2.12

| test | jira |
| --- | --- |
| testReplicateSourceDefault | https://issues.apache.org/jira/browse/KAFKA-15787 |
| testReplicateSourceDefault | https://issues.apache.org/jira/browse/KAFKA-15927 |
| testSyncTopicConfigs | https://issues.apache.org/jira/browse/KAFKA-15945 |
| testSyncTopicConfigs | https://issues.apache.org/jira/browse/KAFKA-14971 |
| testTaskRequestWithOldStartMsGetsUpdated | https://issues.apache.org/jira/browse/KAFKA-15760 |

chia7712 (Contributor) left a comment:

LGTM

@chia7712 chia7712 merged commit 62e5cce into apache:trunk Jun 5, 2024
1 check failed
Goooler (Member) left a comment:

LGTM

apourchet added a commit to apourchet/kafka that referenced this pull request Jun 6, 2024
commit ee834d9
Author: Antoine Pourchet <antoine@responsive.dev>
Date:   Thu Jun 6 15:20:48 2024 -0600

    KAFKA-15045: (KIP-924 pt. 19) Update to new AssignmentConfigs (apache#16219)

    This PR updates all of the streams task assignment code to use the new AssignmentConfigs public class.

    Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>

commit 8a2bc3a
Author: Bruno Cadonna <cadonna@apache.org>
Date:   Thu Jun 6 21:19:52 2024 +0200

    KAFKA-16903: Consider produce error of different task (apache#16222)

    A task does not know anything about a produce error thrown
    by a different task. That might lead to a InvalidTxnStateException
    when a task attempts to do a transactional operation on a producer
    that failed due to a different task.

    This commit stores the produce exception in the streams producer
    on completion of a send instead of the record collector since the
    record collector is on task level whereas the stream producer
    is on stream thread level. Since all tasks use the same streams
    producer the error should be correctly propagated across tasks
    of the same stream thread.

    For EOS alpha, this commit does not change anything because
    each task uses its own producer. The send error is still
    on task level but so is also the transaction.

    Reviewers: Matthias J. Sax <matthias@confluent.io>

commit 7d832cf
Author: David Jacot <djacot@confluent.io>
Date:   Thu Jun 6 21:19:20 2024 +0200

    KAFKA-14701; Move `PartitionAssignor` to new `group-coordinator-api` module (apache#16198)

    This patch moves the `PartitionAssignor` interface and all the related classes to a newly created `group-coordinator/api` module, following the pattern used by the storage and tools modules.

    Reviewers: Ritika Reddy <rreddy@confluent.io>, Jeff Kim <jeff.kim@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>

commit 79ea7d6
Author: Mickael Maison <mimaison@users.noreply.github.com>
Date:   Thu Jun 6 20:28:39 2024 +0200

    MINOR: Various cleanups in clients (apache#16193)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit a41f7a4
Author: Murali Basani <muralidhar.basani@aiven.io>
Date:   Thu Jun 6 18:06:25 2024 +0200

    KAFKA-16884 Refactor RemoteLogManagerConfig with AbstractConfig (apache#16199)

    Reviewers: Greg Harris <gharris1727@gmail.com>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>

commit 0ed104c
Author: Kamal Chandraprakash <kchandraprakash@uber.com>
Date:   Thu Jun 6 21:26:08 2024 +0530

    MINOR: Cleanup the storage module unit tests (apache#16202)

    - Use SystemTime instead of MockTime when time is not mocked
    - Use static assertions to reduce the line length
    - Fold the lines if it exceeds the limit
    - rename tp0 to tpId0 when it refers to TopicIdPartition

    Reviewers: Kuan-Po (Cooper) Tseng <brandboat@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>

commit f36a873
Author: Cy <yimck@uci.edu>
Date:   Thu Jun 6 08:46:49 2024 -0700

    MINOR: Added test for ClusterConfig#displayTags (apache#16110)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit 226f3c5
Author: Sanskar Jhajharia <122860866+sjhajharia@users.noreply.github.com>
Date:   Thu Jun 6 18:48:23 2024 +0530

    MINOR: Code cleanup in metadata module (apache#16065)

    Reviewers: Mickael Maison <mickael.maison@gmail.com>

commit ebe1e96
Author: Loïc GREFFIER <loic.greffier@michelin.com>
Date:   Thu Jun 6 13:40:31 2024 +0200

    KAFKA-16448: Add ProcessingExceptionHandler interface and implementations (apache#16187)

    This PR is part of KAFKA-16448 which aims to bring a ProcessingExceptionHandler to Kafka Streams in order to deal with exceptions that occur during processing.

    This PR brings ProcessingExceptionHandler interface and default implementations.

    Co-authored-by: Dabz <d.gasparina@gmail.com>
    Co-authored-by: sebastienviale <sebastien.viale@michelin.com>

    Reviewer: Bruno Cadonna <cadonna@apache.org>

commit b74b182
Author: Lianet Magrans <98415067+lianetm@users.noreply.github.com>
Date:   Thu Jun 6 09:45:36 2024 +0200

    KAFKA-16786: Remove old assignment strategy usage in new consumer (apache#16214)

    Remove usage of the partition.assignment.strategy config in the new consumer. This config is deprecated with the new consumer protocol, so the AsyncKafkaConsumer should not use or validate the property.

    Reviewers: Lucas Brutschy <lbrutschy@confluent.io>

commit f880ad6
Author: Alyssa Huang <ahuang@confluent.io>
Date:   Wed Jun 5 23:30:49 2024 -0700

    KAFKA-16530: Fix high-watermark calculation to not assume the leader is in the voter set (apache#16079)

    1. Changing log message from error to info - We may expect the HW calculation to give us a smaller result than the current HW in the case of quorum reconfiguration. We will continue to not allow the HW to actually decrease.
    2. Logic for finding the updated LeaderEndOffset for updateReplicaState is changed as well. We do not assume the leader is in the voter set and check the observer states as well.
    3. updateLocalState now accepts an additional "lastVoterSet" param which allows us to update the leader state with the last known voters. any nodes in this set but not in voterStates will be added to voterStates and removed from observerStates, any nodes not in this set but in voterStates will be removed from voterStates and added to observerStates

    Reviewers: Luke Chen <showuon@gmail.com>, José Armando García Sancio <jsancio@apache.org>

commit 3835515
Author: Okada Haruki <ocadaruma@gmail.com>
Date:   Thu Jun 6 15:10:13 2024 +0900

    KAFKA-16541 Fix potential leader-epoch checkpoint file corruption (apache#15993)

    A patch for KAFKA-15046 got rid of fsync on LeaderEpochFileCache#truncateFromStart/End for performance reason, but it turned out this could cause corrupted leader-epoch checkpoint file on ungraceful OS shutdown, i.e. OS shuts down in the middle when kernel is writing dirty pages back to the device.

    To address this problem, this PR makes below changes: (1) Revert LeaderEpochCheckpoint#write to always fsync
    (2) truncateFromStart/End now call LeaderEpochCheckpoint#write asynchronously on scheduler thread
    (3) UnifiedLog#maybeCreateLeaderEpochCache now loads epoch entries from checkpoint file only when current cache is absent

    Reviewers: Jun Rao <junrao@gmail.com>

commit 7763243
Author: Florin Akermann <florin.akermann@gmail.com>
Date:   Thu Jun 6 00:22:31 2024 +0200

    KAFKA-12317: Update FK-left-join documentation (apache#15689)

    FK left-join was changed via KIP-962. This PR updates the docs accordingly.

    Reviewers: Ayoub Omari <ayoubomari1@outlook.fr>, Matthias J. Sax <matthias@confluent.io>

commit 1134520
Author: Ayoub Omari <ayoubomari1@outlook.fr>
Date:   Thu Jun 6 00:05:04 2024 +0200

    KAFKA-16573: Specify node and store where serdes are needed (apache#15790)

    Reviewers: Matthias J. Sax <matthias@confluent.io>, Bruno Cadonna <bruno@confluent.io>, Anna Sophie Blee-Goldman <ableegoldman@apache.org>

commit 896af1b
Author: Sanskar Jhajharia <122860866+sjhajharia@users.noreply.github.com>
Date:   Thu Jun 6 01:46:59 2024 +0530

    MINOR: Raft module Cleanup (apache#16205)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit 0109a3f
Author: Antoine Pourchet <antoine@responsive.dev>
Date:   Wed Jun 5 14:09:37 2024 -0600

    KAFKA-15045: (KIP-924 pt. 17) State store computation fixed (apache#16194)

    Fixed the calculation of the store name list based on the subtopology being accessed.

    Also added a new test to make sure this new functionality works as intended.

    Reviewers: Anna Sophie Blee-Goldman <ableegoldman@apache.org>

commit 52514a8
Author: Greg Harris <greg.harris@aiven.io>
Date:   Wed Jun 5 11:35:32 2024 -0700

    KAFKA-16858: Throw DataException from validateValue on array and map schemas without inner schemas (apache#16161)

    Signed-off-by: Greg Harris <greg.harris@aiven.io>
    Reviewers: Chris Egerton <chrise@aiven.io>

commit f2aafcc
Author: Sanskar Jhajharia <122860866+sjhajharia@users.noreply.github.com>
Date:   Wed Jun 5 20:06:01 2024 +0530

    MINOR: Cleanups in Shell Module (apache#16204)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit bd9d68f
Author: Abhijeet Kumar <abhijeet.cse.kgp@gmail.com>
Date:   Wed Jun 5 19:12:25 2024 +0530

    KAFKA-15265: Integrate RLMQuotaManager for throttling fetches from remote storage (apache#16071)

    Reviewers: Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>

commit 62e5cce
Author: gongxuanzhang <gongxuanzhangmelt@gmail.com>
Date:   Wed Jun 5 18:57:32 2024 +0800

    KAFKA-10787 Update spotless version and remove support JDK8 (apache#16176)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit 02c794d
Author: Kamal Chandraprakash <kchandraprakash@uber.com>
Date:   Wed Jun 5 12:12:23 2024 +0530

    KAFKA-15776: Introduce remote.fetch.max.timeout.ms to configure DelayedRemoteFetch timeout (apache#14778)

    KIP-1018, part1, Introduce remote.fetch.max.timeout.ms to configure DelayedRemoteFetch timeout

    Reviewers: Luke Chen <showuon@gmail.com>

commit 7ddfa64
Author: Dongnuo Lyu <139248811+dongnuo123@users.noreply.github.com>
Date:   Wed Jun 5 02:08:38 2024 -0400

    MINOR: Adjust validateOffsetCommit/Fetch in ConsumerGroup to ensure compatibility with classic protocol members (apache#16145)

    During online migration, there could be ConsumerGroup that has members that uses the classic protocol. In the current implementation, `STALE_MEMBER_EPOCH` could be thrown in ConsumerGroup offset fetch/commit validation but it's not supported by the classic protocol. Thus this patch changed `ConsumerGroup#validateOffsetCommit` and `ConsumerGroup#validateOffsetFetch` to ensure compatibility.

    Reviewers: Jeff Kim <jeff.kim@confluent.io>, David Jacot <djacot@confluent.io>

commit 252c1ac
Author: Apoorv Mittal <apoorvmittal10@gmail.com>
Date:   Wed Jun 5 05:55:24 2024 +0100

    KAFKA-16740: Adding skeleton code for Share Fetch and Acknowledge RPC (KIP-932) (apache#16184)

    The PR adds skeleton code for Share Fetch and Acknowledge RPCs. The changes include:

    1. Defining RPCs in KafkaApis.scala
    2. Added new SharePartitionManager class which handles the RPCs handling
    3. Added SharePartition class which manages in-memory record states and for fetched data.

    Reviewers: David Jacot <djacot@confluent.io>, Andrew Schofield <aschofield@confluent.io>, Manikumar Reddy <manikumar.reddy@gmail.com>

commit b89999b
Author: PoAn Yang <payang@apache.org>
Date:   Wed Jun 5 08:02:52 2024 +0800

    KAFKA-16483: Remove preAppendErrors from createPutCacheCallback (apache#16105)

    The method createPutCacheCallback has a input argument preAppendErrors. It is used to keep the "error" happens before appending. However, it is always empty. Also, the pre-append error is handled before createPutCacheCallback by calling responseCallback. Hence, we can remove preAppendErrors.

    Signed-off-by: PoAn Yang <payang@apache.org>

    Reviewers: Luke Chen <showuon@gmail.com>

commit 01e9918
Author: Kuan-Po (Cooper) Tseng <brandboat@gmail.com>
Date:   Wed Jun 5 07:56:18 2024 +0800

    KAFKA-16814 KRaft broker cannot startup when `partition.metadata` is missing (apache#16165)

    When starting up kafka logManager, we'll check stray replicas to avoid some corner cases. But this check might cause broker unable to startup if partition.metadata is missing because when startup kafka, we load log from file, and the topicId of the log is coming from partition.metadata file. So, if partition.metadata is missing, the topicId will be None, and the LogManager#isStrayKraftReplica will fail with no topicID error.

    The partition.metadata missing could be some storage failure, or another possible path is unclean shutdown after topic is created in the replica, but before data is flushed into partition.metadata file. This is possible because we do the flush in async way here.

    When finding a log without topicID, we should treat it as a stray log and then delete it.

    Reviewers: Luke Chen <showuon@gmail.com>, Gaurav Narula <gaurav_narula2@apple.com>

commit d652f5c
Author: TingIāu "Ting" Kì <51072200+frankvicky@users.noreply.github.com>
Date:   Wed Jun 5 07:52:06 2024 +0800

    MINOR: Add topicIds and directoryIds to the return value of the toString method. (apache#16189)

    Add topicIds and directoryIds to the return value of the toString method.

    Reviewers: Luke Chen <showuon@gmail.com>

commit 7e0caad
Author: Igor Soarez <i@soarez.me>
Date:   Tue Jun 4 22:12:33 2024 +0100

    MINOR: Cleanup unused references in core (apache#16192)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit 9821aca
Author: PoAn Yang <payang@apache.org>
Date:   Wed Jun 5 05:09:04 2024 +0800

    MINOR: Upgrade gradle from 8.7 to 8.8 (apache#16190)

    Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

commit 9ceed8f
Author: Colin P. McCabe <cmccabe@apache.org>
Date:   Tue Jun 4 14:04:59 2024 -0700

    KAFKA-16535: Implement AddVoter, RemoveVoter, UpdateVoter RPCs

    Implement the add voter, remove voter, and update voter RPCs for
    KIP-853. This is just adding the RPC handling; the current
    implementation in RaftManager just throws UnsupportedVersionException.

    Reviewers: Andrew Schofield <aschofield@confluent.io>, José Armando García Sancio <jsancio@apache.org>

commit 8b3c77c
Author: TingIāu "Ting" Kì <51072200+frankvicky@users.noreply.github.com>
Date:   Wed Jun 5 04:21:20 2024 +0800

    KAFKA-15305 The background thread should try to process the remaining task until the shutdown timer is expired. (apache#16156)

    Reviewers: Lianet Magrans <lianetmr@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>

commit cda2df5
Author: Kamal Chandraprakash <kchandraprakash@uber.com>
Date:   Wed Jun 5 00:41:30 2024 +0530

    KAFKA-16882 Migrate RemoteLogSegmentLifecycleTest to ClusterInstance infra (apache#16180)

    - Removed the RemoteLogSegmentLifecycleManager
    - Removed the TopicBasedRemoteLogMetadataManagerWrapper, RemoteLogMetadataCacheWrapper, TopicBasedRemoteLogMetadataManagerHarness and TopicBasedRemoteLogMetadataManagerWrapperWithHarness

    Reviewers: Kuan-Po (Cooper) Tseng <brandboat@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>

commit 2b47798
Author: Chris Egerton <chrise@aiven.io>
Date:   Tue Jun 4 21:04:34 2024 +0200

    MINOR: Fix return tag on Javadocs for consumer group-related Admin methods (apache#16197)

    Reviewers: Greg Harris <greg.harris@aiven.io>, Chia-Ping Tsai <chia7712@gmail.com>
TaiJuWu pushed a commit to TaiJuWu/kafka that referenced this pull request Jun 8, 2024
gongxuanzhang added a commit to gongxuanzhang/kafka that referenced this pull request Jun 12, 2024