migrate leader election to lease API #81030
Conversation
Hi @ricky1993. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
This looks reasonable to me.
(force-pushed from 44fa6ff to 621a0bc)
I'm still confused by the motivation for this change, and IMO release notes are required:
^ What is the user story driving this? You want to switch the locking on the fly; that seems like a really bad idea. What am I missing here?
/hold
See the referenced issue. We should migrate all leader election to use the Lease API (which was designed exactly for this case).
@timothysc - Endpoints and ConfigMaps are not the objects we should be doing leader election against. We have an API that was designed exactly for this purpose: the Lease API. And we don't want to switch it abruptly but in two phases (first move all components to the composite endpoints+lease lock, then move them to the lease lock alone). Is that clearer now?
/test pull-kubernetes-e2e-gce |
/cc @liggitt |
lol, we argued for this years ago but Brian Grant said no way. I don't know how you slid this through, but works for me.
/hold cancel
/assign @liggitt |
(force-pushed from 66455ac to 496ac23)
@wojtek-t @mikedanese PTAL. I will add some unit tests for the multilock at staging/src/k8s.io/client-go/tools/leaderelection/leaderelection_test.go soon.
staging/src/k8s.io/client-go/tools/leaderelection/resourcelock/multilock.go
(force-pushed from 496ac23 to e7980b8)
/test pull-kubernetes-kubemark-e2e-gce-big
I didn't carefully review the tests yet, but the non-test logic looks reasonable to me.
Before I review the tests, I would prefer someone else to also review this.
@mikedanese - can you please take a look?
staging/src/k8s.io/client-go/tools/leaderelection/leaderelection_test.go
(force-pushed from e7980b8 to c5e85d9)
Friendly ping~ @mikedanese
staging/src/k8s.io/client-go/tools/leaderelection/leaderelection.go
I would drop RawRecord from the public resource lock API. I don't think you need it. Other than that, this is what I would expect.
(force-pushed from fb1c1e6 to 944bddd)
Just two minor comments - other than that lgtm
staging/src/k8s.io/client-go/tools/leaderelection/healthzadaptor_test.go
staging/src/k8s.io/client-go/tools/leaderelection/leaderelection_test.go
Change-Id: I21fd5cdc1af59e456628cf15fc84b2d79db2eda0
(force-pushed from 944bddd to 447295a)
LGTM - will let @mikedanese take a final look.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mikedanese, ricky1993. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Re-implements #80508, per the suggestion at #80508 (comment).
If a user is on the endpoint lock and wants to migrate to the lease lock, they can first switch all components (kube-controller-manager, scheduler, etc.) to the "endpointsleases" lock, and then safely switch to the lease lock. Note that the old endpoint lock object will not be cleaned up.
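The two-step rollout described above might look like the following on the command line. The `--leader-elect-resource-lock` flag is the existing component flag for choosing the lock type; treat the exact lock-type values shown here as illustrative of the composite lock this PR adds.

```shell
# Phase 1: roll out the composite lock. Every replica now acquires the
# Endpoints lock and mirrors the leader record into a Lease object.
kube-scheduler --leader-elect-resource-lock=endpointsleases ...

# Phase 2: once all replicas run the composite lock, switch to the
# Lease lock alone. The stale Endpoints lock object is left behind.
kube-scheduler --leader-elect-resource-lock=leases ...
```

The same flag change applies to kube-controller-manager; the key invariant is that phase 2 starts only after every replica has completed phase 1.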
Which issue(s) this PR fixes:
Ref #80289
Special notes for your reviewer:
Implements two composite resource locks for migrating from the endpoints lock and the configmap lock to the lease lock.
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/sig scalability
/assign @wojtek-t
/cc @mikedanese @timothysc any suggestions?