kops-controller stale node label values #10185
I suspect this was fixed by #9575
I'm getting the same behaviour in most of my attempts, with the caveat that the labels do eventually change. Once I discovered that the labels eventually changed (by accident), I measured it to take 52 minutes after the first replacement node became available. Further, to add to the confusion, I did get one attempt in which the labels changed within around 15 minutes (I can't be more precise, as I wasn't measuring times in that attempt). See the watch sketch after the summary below.
Version 1.18.2 (git-84495481e4)
AWS
Repro: on an instance group with existing labels, edit the instance group to a new label value, then roll the change out.
What happened: newly spawned nodes still have the previous value for the label.
Expected: nodes with the correct/new labels.
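A minimal sketch of one way to time this (assuming the label key is `test`, as in the report below; the timestamping wrapper is just illustrative):

```sh
# Stream node listings with a column for the "test" label and stamp each
# update, so the moment the label value flips is measurable.
kubectl get nodes -L test --watch | while read -r line; do
  echo "$(date -u +%H:%M:%S) $line"
done
```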
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-contributor-experience at kubernetes/community.
Fixed in 1.20 by #9575
1. What `kops` version are you running? The command `kops version` will display this information.
Version 1.18.2
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.

3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Edited the value of the label `test=test` on an instance group, then rolled the change out to the nodes.
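For concreteness, a sketch of the likely command sequence (the instance group name `nodes` is an assumption; the label is the one above):

```sh
# Change the node label on the instance group: under spec.nodeLabels,
# edit the value of the "test" key to something new.
kops edit ig nodes

# Apply the change and roll the instance group so replacement nodes
# should come up with the edited label.
kops update cluster --yes
kops rolling-update cluster --yes
```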
5. What happened after the commands executed?
Newly spawned nodes still have the "old" value for the label `test=test`.

6. What did you expect to happen?
Expected nodes with the correct/new labels.
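A quick way to check the outcome (a sketch; `test` is the label key from above):

```sh
# List nodes with a column showing each node's value for the "test"
# label; stale values are immediately visible.
kubectl get nodes -L test
```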
7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.
N/A
9. Anything else do we need to know?
Looking at the (leader) kops-controller logs, it seems that it's unaware of node label changes.
After deleting the leader pod, the new leader started patching nodes with the new labels.
I'm unaware of any configuration for refreshing AWS metadata / kops resources in kops-controller, so my wild guess is that kops-controller is unaware of label changes (probably reading the state file from S3 once, at the beginning of its leadership term).
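Based on that observation, a sketch of the workaround that forced the labels through (assuming kops-controller runs as a DaemonSet in `kube-system` with the pod label `k8s-app=kops-controller`; adjust the selector if your deployment differs):

```sh
# Delete the kops-controller pods; the newly elected leader re-reads the
# cluster state and starts patching nodes with the new labels.
kubectl -n kube-system delete pod -l k8s-app=kops-controller
```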