ZOOKEEPER-2789: Reassign `ZXID` for solving 32bit overflow problem #262

Open · asdf2014 wants to merge 5 commits into master from asdf2014:reassign_zxid · 7 participants
asdf2014 (Member) commented May 23, 2017

At 1k ops/s, the 32-bit counter is exhausted after only $2^{32} / (86400 \times 1000) \approx 49.7$ days. But if we reassign the ZXID into a 16-bit epoch and a 48-bit counter, then (assuming one leader election per day and the same 1k ops/s) the problem will not occur until after $\min(2^{16} / 365,\ 2^{48} / (86400 \times 1000 \times 365)) \approx \min(179.6,\ 8925.5) = 179.6$ years.
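A minimal sketch of what such a 16/48 split could look like (the names and constants here are illustrative, not the actual patch):

    // Illustrative 16-bit-epoch / 48-bit-counter layout; names are hypothetical.
    public final class Zxid16x48 {
        private static final int  COUNTER_BITS = 48;
        private static final long COUNTER_MASK = (1L << COUNTER_BITS) - 1; // 0x0000FFFFFFFFFFFFL

        public static long getEpoch(long zxid)   { return zxid >>> COUNTER_BITS; } // upper 16 bits
        public static long getCounter(long zxid) { return zxid & COUNTER_MASK; }   // lower 48 bits

        public static long makeZxid(long epoch, long counter) {
            return (epoch << COUNTER_BITS) | (counter & COUNTER_MASK);
        }
    }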

However, the ZXID is a long, and in the JVM a read or write of a long (and likewise a double) may be split into separate operations on the high 32 bits and the low 32 bits. Because the ZXID variable is neither declared volatile nor boxed into the corresponding reference type (Long/Double), access to it is not guaranteed to be atomic. Thus, if the lower 16 bits of the upper 32 bits are merged into the low 32 bits of the long to form a 48-bit counter, there may be a concurrency problem.
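For background, the Java Language Specification (JLS 17.7) does permit a non-volatile long write to be performed as two separate 32-bit writes. A standalone demo of that concern (not ZooKeeper code; torn reads are rare on modern 64-bit JVMs, which usually write longs atomically anyway):

    // Demo of possible word tearing on a non-volatile long (JLS 17.7).
    public class LongTearingDemo {
        static long shared; // not volatile, so 64-bit writes may be non-atomic

        public static void main(String[] args) {
            Thread writer = new Thread(() -> {
                for (;;) { shared = 0L; shared = -1L; } // alternate all-zeros / all-ones
            });
            writer.setDaemon(true);
            writer.start();
            for (;;) {
                long seen = shared;
                if (seen != 0L && seen != -1L) { // a torn read mixes halves of the two values
                    System.out.printf("torn read: 0x%016x%n", seen);
                    return;
                }
            }
        }
    }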

asdf2014 force-pushed the asdf2014:reassign_zxid branch from 0d7dabb to dd6e196 May 23, 2017

asdf2014 (Member) commented May 23, 2017

Thinking about some abnormal situations, a 24-bit epoch and a 40-bit counter may be the better choice: assuming one leader election per hour and 1k ops/s, $\min(2^{24} / (24 \times 365),\ 2^{40} / (86400 \times 1000 \times 365)) \approx \min(1915.2,\ 34.9) = 34.9$ years.
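A quick back-of-envelope check of those figures (runnable as-is in jshell):

    // Lifetime of each field under the 24/40 split, in years.
    double epochYears   = Math.pow(2, 24) / (24.0 * 365);           // ~1915.2 (one election per hour)
    double counterYears = Math.pow(2, 40) / (86400.0 * 1000 * 365); // ~34.9   (1k ops/s)
    System.out.println(Math.min(epochYears, counterYears));         // ~34.9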

asdf2014 force-pushed the asdf2014:reassign_zxid branch 2 times, most recently from 330115c to 56e704a May 23, 2017

asdf2014 force-pushed the asdf2014:reassign_zxid branch from 56e704a to d26f58c May 23, 2017

asdf2014 (Member) commented May 23, 2017

It seems all the Java test cases passed, but a problem occurred in the C client (cppunit) test Zookeeper_operations::testOperationsAndDisconnectConcurrently1:

     [exec] /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build.xml:1298: The following error occurred while executing this line:
     [exec] /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build.xml:1308: exec returned: 2
     [exec] 
     [exec] Total time: 15 minutes 45 seconds
     [exec] /bin/kill -9 16911 
     [exec]      [exec] Zookeeper_operations::testAsyncWatcher1 : assertion : elapsed 1044
     [exec]      [exec] Zookeeper_operations::testAsyncGetOperation : elapsed 4 : OK
     [exec]      [exec] Zookeeper_operations::testOperationsAndDisconnectConcurrently1FAIL: zktest-mt
     [exec]      [exec] ==========================================
     [exec]      [exec] 1 of 2 tests failed
     [exec]      [exec] Please report to user@zookeeper.apache.org
     [exec]      [exec] ==========================================
     [exec]      [exec] make[1]: Leaving directory `/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build/test/test-cppunit`
     [exec]      [exec] /bin/bash: line 5: 15116 Segmentation fault      ZKROOT=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/src/c/../.. CLASSPATH=$CLASSPATH:$CLOVER_HOME/lib/clover.jar ${dir}$tst
     [exec]      [exec] make[1]: *** [check-TESTS] Error 1
     [exec]      [exec] make: *** [check-am] Error 2
     [exec] 
     [exec]     Running contrib tests.
     [exec] ======================================================================
     [exec] 
     [exec] /home/jenkins/tools/ant/apache-ant-1.9.9/bin/ant -DZookeeperPatchProcess= -Dtest.junit.output.format=xml -Dtest.output=yes test-contrib
     [exec] Buildfile: /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build.xml
     [exec] 
     [exec] test-contrib:
     [exec] 
     [exec] BUILD SUCCESSFUL
     [exec] Total time: 0 seconds
@@ -121,8 +121,8 @@ public JsonGenerator(LogIterator iter) {
 } else if ((m = newElectionP.matcher(e.getEntry())).find()) {
     Iterator<Integer> iterator = servers.iterator();
     long zxid = Long.valueOf(m.group(2));
-    int count = (int)zxid;// & 0xFFFFFFFFL;
-    int epoch = (int)Long.rotateRight(zxid, 32);// >> 32;
+    long count = zxid & 0xffffffffffL;
+    int epoch = (int)Long.rotateRight(zxid, 40);// >> 40;

nerdyyatrice commented May 24, 2017

Why is this logic repeated all over the code base instead of being a function somewhere in a util file?

asdf2014 (Member) commented May 24, 2017

Yeah, you are right!

-int count = (int)zxid;// & 0xFFFFFFFFL;
-int epoch = (int)Long.rotateRight(zxid, 32);// >> 32;
+long count = zxid & 0xffffffffffL;
+int epoch = (int)Long.rotateRight(zxid, 40);// >> 40;

nerdyyatrice commented May 24, 2017

Same here: the magic number 40 shouldn't fly around the code base like this.

asdf2014 (Member) commented May 24, 2017

All code that processes the ZXID has now been unified to use ZxidUtils.
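For reference, a minimal sketch of what the unified helpers could look like under the 24/40 split (the stock ZxidUtils uses a 32/32 split; the exact names and constants in this patch may differ):

    // Hypothetical 24/40 variant of org.apache.zookeeper.server.util.ZxidUtils.
    public class ZxidUtils {
        static final int  COUNTER_BITS = 40;
        static final long COUNTER_MASK = (1L << COUNTER_BITS) - 1; // 0xFFFFFFFFFFL

        public static long getEpochFromZxid(long zxid)   { return zxid >>> COUNTER_BITS; }
        public static long getCounterFromZxid(long zxid) { return zxid & COUNTER_MASK; }

        public static long makeZxid(long epoch, long counter) {
            return (epoch << COUNTER_BITS) | (counter & COUNTER_MASK);
        }
    }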

asdf2014 force-pushed the asdf2014:reassign_zxid branch from 98a9ceb to 9c46786 May 24, 2017

asdf2014 (Member) commented May 24, 2017

Due to this JVM bug, JDK7 cannot resolve the static import. I will use fully qualified names instead.

    [javac] /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/src/contrib/loggraph/src/java/org/apache/zookeeper/graph/JsonGenerator.java:129: error: cannot find symbol
    [javac] 		    long epoch = getEpochFromZxid(zxid);
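That is, something like the following (illustrative):

    // Instead of:
    //   import static org.apache.zookeeper.server.util.ZxidUtils.getEpochFromZxid;
    //   ...
    //   long epoch = getEpochFromZxid(zxid);
    // use the fully qualified name:
    long epoch = org.apache.zookeeper.server.util.ZxidUtils.getEpochFromZxid(zxid);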

asdf2014 force-pushed the asdf2014:reassign_zxid branch from 79c0d95 to 48ab25b May 24, 2017

maoling (Contributor) commented May 24, 2017

1. Does changing the zxid from high32_low32 to high24_low40 avoid the concurrency problem you referred to?
2. Did you choose high24_low40 because the leader epoch changes much less frequently than the counter does?
3. When the zxid is exhausted, does it restart from 0, force a new leader election, or something else?

asdf2014 (Member) commented May 27, 2017

Hi, @maoling. Thanks for your discussion. Perhaps my description was problematic and confused you.

  1. I was worried that if the lower 8 bits of the upper 32 bits were merged into the low 32 bits of the long to form a 40-bit counter, there might be a concurrency problem. Actually, there is no need to worry: every operation on the ZXID is a bit operation rather than a plain = assignment, so it cannot cause a concurrency problem at the JVM level.

  2. Yes, exactly. In particular, at 1k ops/s the ZXID is exhausted after only $2^{32} / (86400 \times 1000) \approx 49.7$ days, and worse situations bring the re-election process even earlier. Meanwhile, the re-election process can take half a minute, which is unacceptable.

  3. So far, it throws a XidRolloverException to force the re-election process and reset the counter to zero; see the sketch below.
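For context, the existing guard looks roughly like this (paraphrased from Leader.propose() as introduced by ZOOKEEPER-1277; not a verbatim quote):

    // When the low 32 bits are about to wrap, shut the leader down so a new
    // election starts a fresh epoch with the counter back at zero.
    if ((lastProposed & 0xffffffffL) == 0xffffffffL) {
        String msg = "zxid lower 32 bits have rolled over, forcing re-election";
        shutdown(msg);
        throw new XidRolloverException(msg);
    }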

maoling (Contributor) commented Jun 5, 2017

Hi, @asdf2014. Thanks for your explanation! But I still have some confusion about question one.
Look at code like this:

int epoch = (int)Long.rotateRight(zxid, 32);// >> 32;
long count = zxid & 0xffffffffffL;

It all depends on the zxid never being altered (no write operation after the zxid is first generated) in a multithreaded situation; otherwise epoch and count are not idempotent. Should zxid be declared final?

asdf2014 (Member) commented Jun 5, 2017

@maoling You are welcome. I have already changed it into the following code, which I think still guarantees idempotency.

long count = ZxidUtils.getCounterFromZxid(zxid);
long epoch = ZxidUtils.getEpochFromZxid(zxid);

yunfan123 commented Jun 15, 2017

Hi, I think 48 low bits are better for a large-throughput ZooKeeper cluster.
Another benefit: with 48 low bits we assume the epoch stays below $2^{16}$, so we can use the high 16 bits to judge whether a zxid is in the old format or the new one.
Using 48 low bits therefore makes the upgrade process smooth.
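A sketch of that compatibility check (a hypothetical helper; it assumes real deployments never reach epoch $2^{16}$, and note that a new-format zxid with epoch 0 would still look like the old format):

    // The old 32/32 layout with epoch < 2^16 leaves the top 16 bits zero;
    // the proposed 16/48 layout puts the epoch there instead.
    static boolean isNewFormatZxid(long zxid) {
        return (zxid >>> 48) != 0;
    }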

asdf2014 (Member) commented Jun 15, 2017

Hi, @yunfan123. Thank you for your suggestion. As you said, that would guarantee a smooth upgrade. However, if the 16-bit epoch overflows instead of the counter, ZooKeeper can no longer keep providing service through re-election. So I think we should keep enough space for the epoch. What do you think?

yunfan123 commented Jun 16, 2017

Hi, @asdf2014
In most cases, I don't think the epoch can overflow 16 bits.
In general, ZooKeeper leader election is very rare, and it may take several seconds or even several minutes to finish.
ZooKeeper is totally unavailable during leader election.
If the ZooKeeper you run could overflow a 16-bit epoch, it would be totally unreliable anyway.
Finally, compatibility with old versions is really important.
If this change is not compatible with old versions, I must restart all my ZooKeeper nodes.
All nodes would need to reload snapshots and logs from disk, which costs a lot of time.
I believe such an upgrade process is unacceptable to most ZooKeeper users.

JarenGlover commented Dec 7, 2017

@yunfan123 @asdf2014 I have seen this issue twice over a month-long period.

Is there anything one can do to prevent it from happening? Maybe allowing for leader restarts at "off peak hours" weekly? (yuck, I know)

It sounds like we can move forward with this if we move to 48 low bits, correct?

Note: version 3.4.10

asdf2014 (Member) commented Dec 8, 2017

@JarenGlover It's a good idea, but not the best solution. For now, we can still use a restart operation to work around this problem without any changes to ZooKeeper. BTW, you are welcome to read more details about this problem on my blog yuzhouwan.com :-)

phunt (Contributor) commented Dec 12, 2017

Are you seeing this behavior with ZOOKEEPER-1277 applied? If so it's a bug in that change, because after that's applied the leader should shutdown as we approach the rollover.

It would be nice to address this by changing the zxid semantics, but I don't believe that's a great idea. Instead I would rather see us address any shortcoming in my original fix (1277)

fwiw - what I have seen people do in this situation is to monitor the zxid and when it gets close (say within 10%) of the rollover they have an automated script which restarts the leader, which forces a re-election. However 1277 should be doing this for you.
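For illustration, the threshold test in such a script could be as simple as the following (a sketch; the current zxid can be read from the leader, e.g. via the srvr four-letter command):

    // True when the 32-bit counter part of the zxid is within 10% of rolling over.
    static boolean nearRollover(long zxid) {
        long counter = zxid & 0xffffffffL;
        return counter >= (long) (0xffffffffL * 0.9);
    }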

Given you are seeing this issue perhaps you can help with resolving any bugs in 1277? thanks!

asdf2014 (Member) commented Dec 13, 2017

Hi, @phunt. Thank you for your comment. Yes, what we are discussing here stems from the ZOOKEEPER-1277 solution not working very well: it causes the leader to restart many times, and the restart process can even take a few minutes, which some situations cannot tolerate.

phunt (Contributor) commented Dec 14, 2017

Ok, thanks for the update. fwiw restarting taking a few minutes is going to be an issue regardless, no? Any regular type issue, such as a temporary network outage, could cause the quorum to be lost and leader election triggered.

asdf2014 (Member) commented Dec 14, 2017

Hi, @phunt. Indeed, the FastLeaderElection algorithm is very efficient; most leader elections finish within hundreds of milliseconds. However, real-time stream frameworks such as Apache Kafka and Apache Storm can put a lot of pressure on a ZooKeeper cluster when they carry too much business data or processing logic. In that case leader election may be triggered very frequently, and the process becomes time consuming.

breed (Contributor) commented Dec 14, 2017

i think it would be much better to extend ZOOKEEPER-1277 to more transparently do the rollover without a full leader election.

the main issue i have with shortening the epoch size is that once the epoch hits the maximum value the ensemble is stuck, nothing can proceed, so we really need to keep the epoch size big enough that we would never hit that condition. i don't think a 16-bit epoch satisfies that requirement.

asdf2014 (Member) commented Dec 15, 2017

Hi, @breed. Thanks for your comment. You are right, we should keep enough epoch space to avoid an epoch overflow. That is why I offered a better solution, a 24-bit epoch, in my second comment: even if leader election happened once every single hour, we would not hit the epoch overflow until 1915.2 years later.
