LeaderElection: Lease resource left behind #4638
Comments
The initial thought was to allow the user to set an owner reference as part of the LeaderElectionConfig. However, this functionality is not part of the Go client, and given that a likely owner (a Deployment) can be deleted before its underlying pods are fully terminated, this leaves open some potentially odd behavior while things are cleaning up (at best the lock would go away, then fail to be recreated because the owner no longer exists). I cannot come up with a better built-in alternative, though. Any other thoughts from the fabric8 community?
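For reference, the cleanup an owner reference would provide can already be wired up by hand today. The sketch below is not a built-in client feature; the names `my-namespace`, `my-operator`, and `my-operator-lease` are placeholders. It patches an owner reference pointing at the owning Deployment onto the leader-election Lease, so Kubernetes garbage collection deletes the Lease together with the Deployment:

```java
import java.util.Collections;

import io.fabric8.kubernetes.api.model.OwnerReference;
import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class LeaseOwnerPatch {
  public static void main(String[] args) {
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      // Look up the Deployment that should own the Lease.
      Deployment owner = client.apps().deployments()
          .inNamespace("my-namespace")
          .withName("my-operator")
          .get();

      OwnerReference ownerRef = new OwnerReferenceBuilder()
          .withApiVersion(owner.getApiVersion())
          .withKind(owner.getKind())
          .withName(owner.getMetadata().getName())
          .withUid(owner.getMetadata().getUid())
          .build();

      // Attach the owner reference to the already-created leader-election
      // Lease; the garbage collector then removes it with the Deployment.
      client.leases()
          .inNamespace("my-namespace")
          .withName("my-operator-lease")
          .edit(lease -> {
            lease.getMetadata().setOwnerReferences(Collections.singletonList(ownerRef));
            return lease;
          });
    }
  }
}
```

As the comment notes, this still races with deletion ordering: if the Deployment is removed while a pod briefly remains the leader, the lock disappears and cannot be recreated under the vanished owner.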
@katheris the proposal in the PR is to give the user full control over the ObjectMeta used to create the lock via the Lease or ConfigMap lock constructor. Does this seem like a good solution for you?
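Assuming the proposal lands roughly as described, usage might look like the sketch below. The `LeaseLock(ObjectMeta, String)` constructor is the hypothetical shape implied by the comment, not a confirmed signature, and all resource names and the owner UID are placeholders:

```java
import java.time.Duration;

import io.fabric8.kubernetes.api.model.ObjectMeta;
import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.extended.leaderelection.LeaderCallbacks;
import io.fabric8.kubernetes.client.extended.leaderelection.LeaderElectionConfigBuilder;
import io.fabric8.kubernetes.client.extended.leaderelection.resourcelock.LeaseLock;

public class OwnedLockExample {
  public static void main(String[] args) {
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      // Caller-supplied ObjectMeta, including an owner reference, so the
      // Lease is garbage-collected with its owner. The UID would be looked
      // up at startup; "<owner-uid>" is a placeholder.
      ObjectMeta lockMeta = new ObjectMetaBuilder()
          .withNamespace("my-namespace")
          .withName("my-operator-lease")
          .addNewOwnerReference()
            .withApiVersion("apps/v1")
            .withKind("Deployment")
            .withName("my-operator")
            .withUid("<owner-uid>")
          .endOwnerReference()
          .build();

      client.leaderElector()
          .withConfig(new LeaderElectionConfigBuilder()
              .withName("my-operator-leader-election")
              // Hypothetical constructor from the proposal: full control over
              // the lock's ObjectMeta plus this member's identity.
              .withLock(new LeaseLock(lockMeta, "pod-identity"))
              .withLeaseDuration(Duration.ofSeconds(15))
              .withRenewDeadline(Duration.ofSeconds(10))
              .withRetryPeriod(Duration.ofSeconds(2))
              .withLeaderCallbacks(new LeaderCallbacks(
                  () -> System.out.println("became leader"),
                  () -> System.out.println("stopped leading"),
                  leader -> System.out.println("new leader: " + leader)))
              .build())
          .build()
          .run();
    }
  }
}
```

Giving the caller the whole ObjectMeta also covers labels and annotations on the lock resource, not just owner references, which keeps the leader-election API itself unchanged.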
Describe the bug
There is currently no way to have the Lease resource created by leader election automatically cleaned up. There should be a mechanism to provide an owner reference.
Fabric8 Kubernetes Client version
SNAPSHOT
Steps to reproduce
Expected behavior
There should be a mechanism to have this Lease resource cleaned up automatically.
Runtime
minikube
Kubernetes API Server version
1.25.3@latest
Environment
macOS
Fabric8 Kubernetes Client Logs
No response
Additional context
No response