Add prioritized leader election #103442
Conversation
Hi @howardjohn. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@JeremyOT FYI
/ok-to-test
/triage accepted
Could you add tests where |
howardjohn force-pushed from b9338f4 to 67fa728
Yes, I have added them and updated the PR
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: howardjohn. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@howardjohn: The following tests failed, say `/retest` to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
After adding tests, realized this doesn't work with the `LeaseLock`.
I am open to any of the three; (1) seems right, but I don't know how much we are able to modify the v1 API.
@jiahuif any feedback on #103442 (comment)? If not, I will go with (3).
Ping on #103442 (comment)? Thanks!
@howardjohn Sorry for missing this. Option 3 would greatly limit the usefulness of this improvement, because Lease is the most commonly used type of leader election. Because you are about to change a stable API, I believe it is better to write a KEP to propose the feature in a more formal way. See https://github.com/kubernetes/enhancements/tree/master/keps
Also, a preemptible leader lock is an entirely new concept, which I believe requires a KEP anyway.
@jiahuif I have opened kubernetes/enhancements#2835
```diff
@@ -120,3 +120,8 @@ func (cml *ConfigMapLock) Describe() string {
 func (cml *ConfigMapLock) Identity() string {
 	return cml.LockConfig.Identity
 }
+
+// Identity returns the Identity of the lock
```
Suggested change:

```diff
-// Identity returns the Identity of the lock
+// Key returns the Key of the lock
```
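The hunk above is cut off right after the new doc comment; presumably the added accessor mirrors `Identity()` directly above it. A minimal sketch of what the addition likely looks like, assuming this PR adds a `Key` field to `ResourceLockConfig` (the `LeaseLock` hunk later in this diff shows the matching signature):

```go
// Key returns the Key of the lock
func (cml *ConfigMapLock) Key() string {
	return cml.LockConfig.Key
}
```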
```go
	// We will pre-empt the existing leader.
	klog.V(4).Infof("lock is held by %v with key %v, but our key (%v) evicts it", oldLeaderElectionRecord.HolderIdentity, oldLeaderElectionRecord.HolderKey, le.config.Lock.Key())
} else {
	klog.V(4).Infof("lock is held by %v and has not yet expired", oldLeaderElectionRecord.HolderIdentity)
```
Suggested change:

```diff
-klog.V(4).Infof("lock is held by %v and has not yet expired", oldLeaderElectionRecord.HolderIdentity)
+klog.V(4).InfoS("lock is held and has not yet expired", "holderIdentity", oldLeaderElectionRecord.HolderIdentity)
```
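For readers without the full diff: the pre-empt branch above sits in the acquire/renew path of `leaderelection.go`, where an unexpired lock held by someone else normally ends the attempt. A rough reconstruction of the surrounding logic, assuming a `KeyComparison func(existingKey string) bool` field on the config and a `HolderKey` field on the record (names taken from the hunks above, not the PR's exact diff):

```go
// If the lock is held by someone else and has not expired, we normally back
// off - unless our key outranks the holder's, in which case we pre-empt.
if len(oldLeaderElectionRecord.HolderIdentity) > 0 &&
	le.observedTime.Add(le.config.LeaseDuration).After(now.Time) &&
	!le.IsLeader() {
	if le.config.KeyComparison != nil && le.config.KeyComparison(oldLeaderElectionRecord.HolderKey) {
		// We will pre-empt the existing leader.
		klog.V(4).Infof("lock is held by %v with key %v, but our key (%v) evicts it",
			oldLeaderElectionRecord.HolderIdentity, oldLeaderElectionRecord.HolderKey, le.config.Lock.Key())
	} else {
		klog.V(4).Infof("lock is held by %v and has not yet expired", oldLeaderElectionRecord.HolderIdentity)
		return false
	}
}
```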
@howardjohn: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```diff
@@ -101,6 +101,11 @@ func (ll *LeaseLock) Identity() string {
 	return ll.LockConfig.Identity
 }
+
+// Key returns the Key of the lock
+func (ll *LeaseLock) Key() string {
```
Calling `Update()` or `Create()` on the `LeaseLock` leaves out the `HolderKey`. Lease lock is different from the rest of the resource lock impls since it uses the lease spec instead of cramming all the fields in an annotation, so we need to find a new place to put the key (putting `HolderKey` and priority as a concept in `v1.Lease` seems wrong, but I'm not super clear on what lease gets used for in k8s besides kubelet heartbeats).
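Some background on why this is awkward: the annotation-based locks serialize the whole `LeaderElectionRecord` as opaque JSON, so a new `HolderKey` field rides along for free, while `LeaseLock` maps each record field onto a typed `coordinationv1.LeaseSpec` field, which has no slot for a key. A simplified sketch of the two paths (helper names are illustrative; the mapping mirrors what client-go's `resourcelock` package does):

```go
import (
	"encoding/json"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	rl "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// ConfigMap/Endpoints locks: the record travels as opaque JSON in a single
// annotation, so new fields like HolderKey need no API change.
func recordToAnnotation(record rl.LeaderElectionRecord) (key, value string, err error) {
	bytes, err := json.Marshal(record)
	return rl.LeaderElectionRecordAnnotationKey, string(bytes), err
}

// LeaseLock: each record field maps onto a typed v1 LeaseSpec field, so
// HolderKey has nowhere to go without changing the stable Lease API.
func recordToLeaseSpec(record rl.LeaderElectionRecord) coordinationv1.LeaseSpec {
	leaseDuration := int32(record.LeaseDurationSeconds)
	transitions := int32(record.LeaderTransitions)
	return coordinationv1.LeaseSpec{
		HolderIdentity:       &record.HolderIdentity,
		LeaseDurationSeconds: &leaseDuration,
		AcquireTime:          &metav1.MicroTime{Time: record.AcquireTime.Time},
		RenewTime:            &metav1.MicroTime{Time: record.RenewTime.Time},
		LeaseTransitions:     &transitions,
	}
}
```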
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

* After 90d of inactivity, `lifecycle/stale` is applied
* After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
* After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

* Mark this issue or PR as fresh with `/remove-lifecycle stale`
* Mark this issue or PR as rotten with `/lifecycle rotten`
* Close this issue or PR with `/close`
* Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Is this PR still needed? Please rebase if so (or we can close it?)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

* After 90d of inactivity, `lifecycle/stale` is applied
* After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
* After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

* Mark this issue or PR as fresh with `/remove-lifecycle rotten`
* Close this issue or PR with `/close`
* Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

* After 90d of inactivity, `lifecycle/stale` is applied
* After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
* After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

* Reopen this PR with `/reopen`
* Mark this PR as fresh with `/remove-lifecycle rotten`
* Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@howardjohn is this still active? Should it be re-opened?
@nmittler Currently in Istio we have forked the leader election library. kubernetes/enhancements#2835 has been scope-creeped beyond the initial requirements, moving it beyond my bandwidth + expertise. If we can drop those additional requirements, or get someone else involved as well, I think we could progress this.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This introduces a new concept to the `leaderelection` package, allowing participants to declare a precedence order between certain clients.
Example use cases:

* I have multiple versions of an application running concurrently in the cluster; I want the newest one to become the leader (cc @Monkeyanator)
* I have multiple replicas with different resources; I want the largest replica to be the leader
* I am implementing a cross-region controller (think multicluster service discovery) and want the geographically closest region to be the leader (to avoid something like a US controller acting on a cluster in Asia, when I have a controller replica already running in Asia) (cc @JeremyOT)
Library users can define an arbitrary key to indicate whatever metadata they choose. They may also choose to implement a KeyComparison function. If present, this function will be used to determine if our key is higher priority than the existing leader's; if it is, we will acquire the lock even though it's already held by another.
Version skew compatibility: old clients will not set a key, nor will they implement a KeyComparison function. If they see a lock with a key set, they will gracefully ignore it, which is not an issue - this just means they will never steal a lock. Because the key is empty, a KeyComparison function must account for `""` as a key value, but it would need to do that regardless, as the field is optional. As a result, I do not expect any version skew issues.
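To make the proposed API concrete, here is a hypothetical caller. The `Key` field on `resourcelock.ResourceLockConfig` and the `KeyComparison` callback on `leaderelection.LeaderElectionConfig` are this PR's additions as described above (their exact names and signatures are inferred, not confirmed against the diff); everything else is the existing client-go API. Note how the comparison treats the empty key published by old clients as lowest priority:

```go
import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func runElection(ctx context.Context, client kubernetes.Interface, podName string) error {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "my-controller"},
		Client:    client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: podName,
			Key:      "v2.1.0", // hypothetical field: arbitrary caller-defined metadata
		},
	}
	elector, err := leaderelection.NewLeaderElector(leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		// Hypothetical callback: return true to steal the lock from the
		// current holder. Old clients publish no key, so "" must sort
		// below every real key (lexicographic ordering is just for show).
		KeyComparison: func(existingKey string) bool {
			return existingKey == "" || existingKey < "v2.1.0"
		},
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start controllers */ },
			OnStoppedLeading: func() { /* step down cleanly */ },
		},
	})
	if err != nil {
		return err
	}
	elector.Run(ctx)
	return nil
}
```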
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: