TreeCache recipe creating heavy load on ZK while reconnecting. #664

Open
Buffer0x7cd opened this issue Mar 30, 2022 · 0 comments · May be fixed by #683
@Buffer0x7cd

The TreeCache recipe seems to generate a large amount of traffic while reconnecting to the ZK node. (This usually happens when there is a leader election in the cluster, which forces every client to disconnect and then go into a reconnection loop that lasts until the leader election finishes.)

self._in_background(self._root.on_reconnected)

Here we can see that TreeCache tries to reload the entire tree after reconnecting to the ZK node. To test this, we ran a 3-node ZK cluster with 5k znodes and around 200 TreeCache clients (the clients use TreeCache to maintain an in-memory view of the ZK data). During the leader election we observed a 40 to 50x increase in read traffic (roughly 600 rps climbing to 30k rps), which ends up causing a thundering herd problem for the cluster.
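
For context on why the burst scales the way it does, here is an illustrative sketch (not kazoo's actual TreeCache implementation) of what a full reload implies: every cached znode is read again for both its data and its child list, so a tree of N znodes costs roughly 2*N reads per client, multiplied by every client reconnecting at the same time.

```python
# Illustrative sketch only -- not kazoo's TreeCache code. It shows why a full
# reload after a reconnect scales with the size of the tree: each znode is
# read twice (once for its data, once for its child list), recursively.
from kazoo.client import KazooClient

def refresh_subtree(zk, path):
    """Re-fetch data and children for every znode under `path`; return the read count."""
    data, stat = zk.get(path)           # one read for the znode's data
    children = zk.get_children(path)    # one read for its child list
    reads = 2
    for child in children:
        reads += refresh_subtree(zk, path.rstrip("/") + "/" + child)
    return reads

if __name__ == "__main__":
    zk = KazooClient(hosts="127.0.0.1:2181")   # assumed local test server
    zk.start()
    total = refresh_subtree(zk, "/")
    print(f"one client issued ~{total} reads; multiply by the number of clients")
    zk.stop()
```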

After some discussion, we came up with a few ideas that could help avoid this situation:

  1. Instead of reloading the entire tree after a reconnect event, only reload the znodes that were updated between the disconnect and reconnect events. ZK guarantees that watches for those znodes will be triggered once the client reconnects, so we can use this to do a selective update of the znodes.
  2. Introduce client-side rate limiting, which can help smooth the traffic burst after a reconnection. Clients should be able to use this to avoid overwhelming the cluster (a rough sketch follows this list).
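
As a rough illustration of idea 2 (nothing like this exists in kazoo today; the `TokenBucket` class and the rates below are made up for the example), a token bucket wrapped around the post-reconnect reads would spread the reload over time instead of firing it all at once:

```python
# Hypothetical client-side rate limiter -- a sketch of idea 2, not kazoo code.
import threading
import time

class TokenBucket:
    """Allow at most `rate` operations per second, with a burst of `burst`."""
    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)

# Usage: call bucket.acquire() before every read issued during the
# post-reconnect reload, e.g.
#   bucket = TokenBucket(rate=50, burst=10)   # at most ~50 reads/s per client
#   bucket.acquire()
#   data, stat = zk.get(path)
```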

Let me know what you think about the issue.

@ceache linked a pull request Nov 12, 2022 that will close this issue
ceache added a commit to ceache/kazoo that referenced this issue Feb 6, 2024
Add a "semaphore_impl" attribute on the various handlers.
Allow a new, optional, `concurrent_request_limit` argument to the client
constructor.
Change the client to bound the number of outstanding async requests with
a semaphore limited to `concurrent_request_limit`.

Fixes python-zk#664
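
As a minimal sketch of the idea in that commit message (the `BoundedClient` wrapper below is hypothetical and not the code from PR #683; `get_async` and `rawlink` are existing kazoo APIs), the client can hold a semaphore slot for each outstanding async request and release it when the request completes:

```python
# Hypothetical sketch of bounding outstanding async requests with a semaphore;
# the real change lives in PR #683.
import threading
from kazoo.client import KazooClient

class BoundedClient:
    """Wrap a KazooClient so at most `concurrent_request_limit` async reads are in flight."""
    def __init__(self, client, concurrent_request_limit):
        self._client = client
        self._semaphore = threading.BoundedSemaphore(concurrent_request_limit)

    def get_async(self, path):
        self._semaphore.acquire()                     # block when the limit is reached
        async_result = self._client.get_async(path)
        async_result.rawlink(lambda _res: self._semaphore.release())  # release on completion
        return async_result

# Usage:
#   zk = KazooClient(hosts="127.0.0.1:2181")
#   zk.start()
#   bounded = BoundedClient(zk, concurrent_request_limit=16)
#   data, stat = bounded.get_async("/some/znode").get()
```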