refactor pd job mechanism #103
## How to

We have only one final state for every region.

### For conf change

Leader 1 of region a asks for ChangePeer; pd adds peer 2 to region a directly, so the region meta in etcd now contains peers 1 and 2. The region in TiKV must eventually enter this final state. If leader 1 asks for ChangePeer again, pd still replies "add peer 2", until pd finds that region a actually has peers 1 and 2; only then does it proceed with the next conf change.

### For split

Leader 1 of region a [a-c) asks for Split; pd only allocates the new region ID and peer IDs.

Problem: if region a reports its status first but region b is delayed, we may see a gap in the key range, because we don't know where to find the data in [b, c). This gap may be fixed later.
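The conf-change loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not PD's actual code: the `Region` type and `NextConfChange` helper are hypothetical, and real PD tracks much more state.

```go
package main

import "fmt"

// Region models the metadata pd stores in etcd: the peer IDs that
// form the region's single final state. (Hypothetical type for
// illustration; not PD's real schema.)
type Region struct {
	ID    uint64
	Peers []uint64 // desired (final) peer set recorded by pd
}

// NextConfChange compares the peers a leader reports with the final
// state in etcd and returns the next peer to add, or 0 if the region
// has already reached the final state.
func NextConfChange(final Region, reported []uint64) uint64 {
	have := make(map[uint64]bool, len(reported))
	for _, p := range reported {
		have[p] = true
	}
	for _, p := range final.Peers {
		if !have[p] {
			return p // keep replying the same "add peer" until it lands
		}
	}
	return 0 // final state reached; a new conf change may proceed
}

func main() {
	regionA := Region{ID: 1, Peers: []uint64{1, 2}}
	// Leader 1 keeps asking; pd keeps answering "add peer 2"
	// until the reported peer set matches the final state.
	fmt.Println(NextConfChange(regionA, []uint64{1}))    // 2
	fmt.Println(NextConfChange(regionA, []uint64{1, 2})) // 0
}
```

The point of the sketch is that pd is stateless about in-flight jobs here: every heartbeat is answered purely by diffing the reported peers against the single recorded final state, so repeated requests are idempotent.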
/cc @disksing Seems something should be done in
Do you mean all commands, including read-only requests such as GetRegion?

This brings a risk of deadlock: we have two "states", one in the raft node and the other in pd. The whole process forms a ring; if raft runs into some trouble and can't finish the job (although I haven't come up with a specific case), a deadlock will occur.
BTW, as long as we have GetRegion, those changes don't affect #102 much, i.e. if a peer finds itself inactive for a long time, it asks pd whether it's still alive.
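That self-check could look roughly like this. This is a hedged sketch only: `RegionMeta` and `StillAlive` are invented names, and the metadata is assumed to come from a GetRegion-style call to pd.

```go
package main

import "fmt"

// RegionMeta stands in for the authoritative region metadata a
// GetRegion-style pd call would return. (Hypothetical type.)
type RegionMeta struct {
	Peers []uint64
}

// StillAlive reports whether peerID is still a member of the region
// according to pd. A peer that has been inactive for a long time can
// use this to learn whether it has been removed.
func StillAlive(meta RegionMeta, peerID uint64) bool {
	for _, p := range meta.Peers {
		if p == peerID {
			return true
		}
	}
	return false
}

func main() {
	meta := RegionMeta{Peers: []uint64{1, 2, 3}}
	fmt.Println(StillAlive(meta, 2)) // true: still a member
	fmt.Println(StillAlive(meta, 5)) // false: removed; safe to clean up
}
```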
pd now has no job, so no deadlock.
## Problems

## Principle

/cc @ngaut @qiuyesuifeng @disksing @tiancaiamao