
Improve client partition table update push mechanism #16937

Merged
merged 1 commit into from May 4, 2020

Conversation

@mdogan (Contributor) commented Apr 29, 2020

When a partition table update is detected, a member pushes the
updated partition table to its clients.

When there are many clients (hundreds or more), partition table updates
cause large latencies in the migration system. The reason is that the partition
service's lock must be acquired to create the latest view of the partition table,
and this happens on every partition update for every client.

To fix this, two improvements are made:

  • Avoid creating a partition table view for every client. A new partition
    table object can be created once, and the same object pushed to all clients.
    This significantly reduces lock contention. (This is already done in Hazelcast 4.0+.)

  • Skip some intermediate partition table updates. There's no need to push every update:
    once partition table updates begin, there will usually be many of them, and most become
    stale in a short time. Skipping some reduces push frequency and lock contention.
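The two improvements above can be sketched roughly as follows. This is a minimal illustration, not Hazelcast's actual internal API: names like `PartitionTableView` and `PartitionTablePusher` and the version-comparison skip logic are assumptions made for the example (the real PR may coalesce updates differently, e.g. via scheduled pushes):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical immutable snapshot of the partition table.
final class PartitionTableView {
    final int version;
    PartitionTableView(int version) { this.version = version; }
}

final class PartitionTablePusher {
    // Version of the last table actually pushed to clients.
    private final AtomicInteger pushedVersion = new AtomicInteger(-1);

    /**
     * Called on each detected partition table update. Builds the view
     * once (the part that needs the partition service's lock) and
     * reuses the same object for all clients, instead of rebuilding it
     * per client. Updates that are already stale are skipped entirely.
     * Returns the number of clients the table was pushed to.
     */
    int onPartitionTableUpdate(int newVersion, List<String> clients) {
        int pushed = pushedVersion.get();
        if (newVersion <= pushed) {
            return 0; // stale or duplicate update: skip the push
        }
        if (!pushedVersion.compareAndSet(pushed, newVersion)) {
            return 0; // a newer table was pushed concurrently: skip
        }
        // Create the latest view ONCE, not once per client.
        PartitionTableView view = new PartitionTableView(newVersion);
        int count = 0;
        for (String client : clients) {
            push(client, view); // same immutable object for every client
            count++;
        }
        return count;
    }

    private void push(String client, PartitionTableView view) {
        // A real member would serialize the view and send it here.
    }
}
```

With this shape, a burst of updates (versions 5, 6, 7 arriving close together) results in at most one view construction per pushed version, and clients that race with a newer push never receive an out-of-date table.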

4.0: #16938
4.1: #16939

@mdogan mdogan force-pushed the client-ptable-update branch 3 times, most recently from f633162 to 7b42e14 Compare April 30, 2020 09:06
@hazelcast hazelcast deleted a comment May 4, 2020
@mmedenjak mmedenjak added the Source: Internal PR or issue was opened by an employee label May 4, 2020
@mmedenjak mmedenjak merged commit 9c4ca8c into hazelcast:3.12.z May 4, 2020
@mdogan mdogan deleted the client-ptable-update branch May 5, 2020 10:28