
KAFKA-2437: Fix ZookeeperLeaderElector to handle node deletion correctly. #189

Closed

Conversation

becketqin (Contributor)

No description provided.

leaderId = KafkaController.parseControllerId(data.toString)
info("New leader is %d".format(leaderId))
// The old leader needs to resign leadership if it is no longer the leader
if (amILeaderBeforeDataChange && !amILeader)
onResigningAsLeader
Contributor
Minor comment: this should include () as it has a side-effect. +1 otherwise.
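The reviewer is citing the Scala convention that methods with side effects are declared and invoked with empty parentheses, while pure accessors omit them. A minimal sketch of that convention (the class and names here are illustrative, not the actual Kafka code):

```scala
// Illustrative example of the parentheses convention the reviewer cites.
class LeaderElector {
  private var leader = true

  // Pure accessor: no parens, signaling it has no side effects.
  def amILeader: Boolean = leader

  // Side-effecting callback: declared with (), so call sites write
  // onResigningAsLeader() rather than a bare onResigningAsLeader.
  def onResigningAsLeader(): Unit = {
    leader = false
    println("Resigned leadership")
  }
}

val elector = new LeaderElector
elector.onResigningAsLeader() // parens make the side effect explicit
```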

@asfgit asfgit closed this in 6f398d6 Sep 3, 2015

asfbot commented Sep 3, 2015

kafka-trunk-git-pr #334 FAILURE
Looks like there's a problem with this pull request

benstopford pushed a commit to benstopford/kafka that referenced this pull request Sep 10, 2015
…tly.

Author: Jiangjie Qin <becket.qin@gmail.com>

Reviewers: Joel Koshy <jjkoshy.w@gmail.com>

Closes apache#189 from becketqin/KAFKA-2437
C0urante pushed a commit to C0urante/kafka that referenced this pull request May 10, 2019
support-metrics-client should depend on the test sources from
support-metrics-common.

KafkaUtilitiesTest.scala is simply a Scala copy of a Java test, so
remove it.  This also fixes a problem where support-metrics-client and
support-metrics-common had different classes with the same name.

Port over MetricsReporterTest.java, MetricsToKafkaTest.java,
SupportedServerStartableTest.java, BasicCollectorTest.java,
CollectorFactoryTest.java from the old support-metrics-client
repository.  These were missed in the original conversion.

Reviewers: Dhruvil Shah <dhruvilshah05@gmail.com>
jsancio pushed a commit to jsancio/kafka that referenced this pull request Aug 6, 2019
Implements replication for tiered storage.

Adds two additional states to the replica fetcher: MaterializingTierMetadata and FetchingTierState. The fetcher thread attempts to fetch from the leader and, upon hitting an OFFSET_TIERED error, performs a TIER_LIST_OFFSETS request to retrieve the local disk offsets for the partition. It then transitions to the MaterializingTierMetadata state and materializes tier metadata asynchronously until the metadata overlaps with the leader's local offsets. After doing so, it fetches the tiered epoch state that aligns with the point replication will start from, and restores this epoch state. If an error occurs at any point, the fetcher goes back to the start: TIER_LIST_OFFSETS request -> MaterializingTierMetadata -> FetchingTierState. This approach has the advantage of replicating the hot set, since the fetcher always replicates the data that the leader has on disk.
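The state transitions described above can be sketched as a small state machine. This is a simplified illustration based only on the description in this commit message, not the actual fetcher implementation; the state names ListingTierOffsets and Fetching are assumed here for the initial and steady states:

```scala
// Sketch of the replica fetcher state transitions for tiered storage,
// as described in the commit message. Any error restarts the restore
// path from the TIER_LIST_OFFSETS step.
sealed trait FetcherState
case object ListingTierOffsets        extends FetcherState // TIER_LIST_OFFSETS request
case object MaterializingTierMetadata extends FetcherState
case object FetchingTierState         extends FetcherState
case object Fetching                  extends FetcherState // normal replication

def nextState(state: FetcherState,
              metadataOverlapsLeader: Boolean,
              hitError: Boolean): FetcherState =
  if (hitError) ListingTierOffsets // restart the restore path on any error
  else state match {
    case ListingTierOffsets => MaterializingTierMetadata
    case MaterializingTierMetadata =>
      // Materialize until tier metadata overlaps the leader's local offsets.
      if (metadataOverlapsLeader) FetchingTierState
      else MaterializingTierMetadata
    case FetchingTierState => Fetching // epoch state restored, resume replication
    case Fetching          => Fetching
  }
```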

The TierPartitionState now allows materialization of overlapping segment ranges, as long as they contain some additional data. The tier fetcher will currently preferentially fetch from the segment with the highest base offset.

As part of the replication tier state restore path, a TierStateFetcher was added for fetching tier state, such as the leader epoch state and the producer state snapshot, from the object store. It is used by the replication code and is primarily included so that state restoration has a separate queue from the TierFetcher: we want this state to be restored as quickly as possible, not stuck behind large topic fetch requests.

Additional features:
- adds the ability to perform a TierListOffsetRequest to retrieve the leader's local log start and end offset
- adds kafka.tier.tools.DumpTierPartitionState to dump the contents of FileTierPartitionState as text.

Testing:

Adds several replication unit tests via mocking in AbstractFetcherThreadTest. Tests scenarios where tier state has to be restored, as well as scenarios where the local log is empty and no tier state restore is required.
TierEpochStateReplicationTest: two broker test where producer produces, follower is shutdown, producer continues producing, follower is started up, and the follower restores tier state.
TierEpochStateRevolvingReplicationTest: three broker test where a random follower is shutdown, producer produces, then the follower is started back up. Each restart requires the tier state to be restored. On each restart the tier epoch cache is compared between all brokers.

Co-authored-by: Dhruvil Shah <dhruvil@confluent.io>
efeg pushed a commit to efeg/kafka that referenced this pull request Jan 29, 2020