Avoid shard id update of replica if not matching with primary shard id #573
base: unstable
Conversation
Force-pushed from 7cabc57 to 1714613
I am not sure I understand the event sequence that leads to a corrupt state. Can you elaborate? The change makes sense to me: essentially, with this change there is now an order in which the shard-id is updated in a shard, primary first and replicas next. BTW, this change also requires us to sequence the assignment of the primary before the invocation of [...]. There are some timeout failures in the test pass though; that is a bit surprising.
With the picture in the top comment, I think I now understand the case. The changes LGTM. BTW, the tests seem to keep failing.
Codecov Report
Attention: Patch coverage is [...]
Additional details and impacted files:
@@ Coverage Diff @@
## unstable #573 +/- ##
============================================
+ Coverage 70.20% 70.31% +0.10%
============================================
Files 111 111
Lines 60242 60243 +1
============================================
+ Hits 42295 42360 +65
+ Misses 17947 17883 -64
From further investigation, the timeout failure is caused by an infinite while loop within this block:
https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L5855C1-L5858C2
It looks like there can be a temporary invalid state in the cluster where nodes end up pointing to each other as primary/replica (sketched just below this comment). We could take two approaches to this infinite loop:
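For context, the linked block appears to be a short walk up the replicaof chain toward the root primary. Here is a minimal sketch of that failure mode, with simplified, assumed names rather than the exact upstream code: if two nodes transiently point at each other as primary/replica, the walk never terminates, which would explain the test timeouts.

```c
/* Simplified sketch of a replicaof chain walk (assumed names, not the
 * actual cluster_legacy.c code). */
typedef struct clusterNode {
    struct clusterNode *replicaof; /* NULL when the node is a primary */
} clusterNode;

static clusterNode *getPrimaryOf(clusterNode *node) {
    /* If A->replicaof == B and B->replicaof == A at the same time, this
     * loop never exits. */
    while (node->replicaof != NULL) node = node->replicaof;
    return node;
}
```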
We have had multiple issues like this in the past, and I think we always tried to figure them out. Maybe we should use this chance to add a helper method for setting replicaof so that we check for loops.
And if we detect a loop, do we crash?
Maybe we debug-assert crash (as in, only crash during a test). For normal production, we unwind, or maybe we ignore it and wait for the other node to update us.
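A minimal sketch of the helper idea from the last few comments, assuming a hypothetical setNodeReplicaOf() wrapper: walk the proposed primary chain before committing, assert only when assertions are enabled (debug/test builds), and otherwise skip the update and wait for the other node to send a consistent view.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct clusterNode {
    struct clusterNode *replicaof; /* NULL when the node is a primary */
} clusterNode;

/* Hypothetical helper: refuse to install a replicaof pointer that would
 * create a primary/replica cycle. */
static bool setNodeReplicaOf(clusterNode *node, clusterNode *primary) {
    for (clusterNode *n = primary; n != NULL; n = n->replicaof) {
        if (n == node) {
            /* Only aborts when assertions are enabled (debug/test builds). */
            assert(0 && "primary/replica cycle detected");
            return false; /* in production: ignore the inconsistent update */
        }
    }
    node->replicaof = primary;
    return true;
}
```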
LGTM overall, but it would be great if you could provide some more context in the code comment (left review feedback too).
debugAssert is reasonable, but I don't think we should crash the server just because there is a loop. In fact, we already have logic to break the loop. I will suggest a fix in #609.
@hpatro Sorry for taking so long to circle back on this; the DCO check was failing last time and I forgot to ping you to update it. I think this is good to merge otherwise.
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Force-pushed from 69a7d96 to 770cfa9
@madolson Had to force push. PTAL.
LGTM, just want to wait for some more comprehensive tests.
We actually hit the replication cycle assert rather consistently in the test run @madolson shared above. This is something that I haven't seen before.
Yeah, this change invokes the API more frequently. Someone needs to deep dive further to understand how we reach this state.
I deep dived into it with an AWS engineer last week; I have a partial fix and will post it early next week.
I took a look too and realized it's a regression introduced by my slot migration PR #445. That change started allowing a replica to report its primary's slot states and trigger clusterUpdateSlotsConfigWith (PR #445 - Slot Migration Changes). Here's what I think happens in these test failures involving a 3-node shard: [T1] - Node [...]. At this point, [...]
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
This still fails after merging #754 due to the primary/replica cycle. It still needs a deep dive.
I think we should consider whether this PR is still needed if/when we reduce the delay (see #778). This was a great PR and moved mountains in terms of figuring out what was wrong, but it would be great to eliminate the delay entirely. Instead of mitigating the effects of the shard ID not being stabilized, we can connect the needed flags to the node immediately during the handshake, thus avoiding this situation entirely. This approach will also have the benefit of increasing the speed of stabilization, as there will be fewer "hops" needed to reach shard ID consensus.
I have a theory about how this could happen.
Now, imagine the following scenario: [...]
I have seen stale messages in the past, and I also notice that the latest failure is in the codecov run, which could alter the timing quite a bit, so I think this theory is very plausible. The fix would be to bail immediately after detecting the stale message (Line 3273 in 2b76c8f); a rough sketch of the idea follows this comment.
BTW, we have another undetected stale message issue (#798).
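A rough, hypothetical sketch of the bail-out idea above; the struct, the function, and the epoch-based staleness test are illustrative assumptions, not the actual cluster_legacy.c code.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct clusterNode {
    uint64_t configEpoch; /* highest config epoch recorded for this node */
} clusterNode;

/* A packet carrying an older epoch than what we already recorded for the
 * sender reflects an outdated view of the topology. */
static bool messageIsStale(const clusterNode *sender, uint64_t senderConfigEpoch) {
    return senderConfigEpoch < sender->configEpoch;
}

/* The proposal is then an early return in the packet handler, e.g.
 *     if (messageIsStale(sender, senderConfigEpoch)) return 1;
 * so the rest of the stale packet never mutates primary/replica state. */
```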
Yeah, I think we will need both. Let me pick up my slack next ... :(
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
The tests still fail for [...]
The shard_id of a replica shouldn't be updated if it doesn't match the primary's shard_id.
During cluster setup, the shard id is established through extensions data propagation. If the engine crashes/restarts while the shard id reconciliation is still in progress, the cluster config file can end up corrupted, which makes the engine restart fail.
Scenario:
Let's say there are two nodes in a cluster, Node A and Node B, and all admin operations are performed on Node B. Node A and Node B finish the handshake but haven't shared the extensions information yet. Node B is made a replica of Node A. As part of Node B sharing the slaveof information, it also shares its temporary shard-id. During regular packet processing on Node A, while handling the replication information, the shard id of Node A is applied to Node B. Then, during extensions processing on Node A, the shard id passed by Node B is applied, which diverges from the shard id of Node A. A crash/restart that follows leads to an unrecoverable, corrupted cluster configuration file state.
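To make the described rule concrete, here is a minimal sketch of the guard this PR is about; the field names loosely follow clusterNode, but this is an illustration of the idea rather than the exact patch.

```c
#include <stdbool.h>
#include <string.h>

#define CLUSTER_NAMELEN 40 /* assumed shard-id length, same as a node id */

typedef struct clusterNode {
    char shard_id[CLUSTER_NAMELEN];
    struct clusterNode *replicaof; /* NULL if this node is a primary */
} clusterNode;

/* Accept a shard-id update for a replica only when it matches the shard id
 * already established on its primary, so the primary's shard id always wins
 * and a replica can never diverge from it. */
static bool updateShardIdIfConsistent(clusterNode *node, const char *shard_id) {
    if (node->replicaof != NULL &&
        memcmp(node->replicaof->shard_id, shard_id, CLUSTER_NAMELEN) != 0) {
        return false; /* proposed shard id disagrees with the primary: ignore */
    }
    memcpy(node->shard_id, shard_id, CLUSTER_NAMELEN);
    return true;
}
```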