IG: "kops.k8s.io/instancegroup" property missing under "nodeLabels" for instance groups created via "kops create cluster" command #16378
Is this a bug? Without too much understanding of the codebase design, I notice there's a fallback to the "node" role type. If I am not off-track, this is more of an enhancement for a null-check anti-pattern due to a hotspot in the codebase, so perhaps nothing to worry about.
@teocns Not sure if I understand your comment, but the issue is not related to the actual node type; that works just fine. The issue is that the label (at node level) which identifies the kOps instance group for each specific node is missing when using the `kops create cluster` command.

For our environments that was a breaking change (and we had to manually update and roll out the IGs) because we actively use things like Kubernetes anti/affinity rules and Prometheus metrics which rely on the value of the `kops.k8s.io/instancegroup` label.

The actual yaml property that I would expect to be present for each IG created via `kops create cluster`:

  nodeLabels:
    kops.k8s.io/instancegroup: <IG_NAME>
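For reference, a minimal InstanceGroup manifest sketch showing where that label is expected to appear (the IG name, cluster name, and instance sizing below are illustrative, not taken from the reporter's cluster):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-us-east-1a                  # illustrative IG name
  labels:
    kops.k8s.io/cluster: my.example.com   # illustrative cluster name
spec:
  role: Node
  machineType: t3.medium                  # illustrative sizing
  minSize: 1
  maxSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-us-east-1a   # the label reported missing
```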
Gotcha, you rely on the label as an affinity selector within your own workflow, while my observation was oriented more towards kops' own functional integrity. Thanks for clarifying.
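To illustrate the kind of dependency described above, a node-affinity rule keyed on the label might look like this (a sketch only; the IG name is invented):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kops.k8s.io/instancegroup
              operator: In
              values: ["nodes-us-east-1a"]   # illustrative IG name
```

If the label is absent from the nodes, pods carrying such a rule become unschedulable, which is why the missing property was a breaking change in this scenario.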
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/kind bug
1. What kops version are you running? The command `kops version` will display this information.

Tested with `Client version: 1.28.4 (git-v1.28.4)` and `Client version: 1.27.3 (git-v1.27.3)`
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.

N/A
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
5. What happened after the commands executed?

Check Answer 4.
6. What did you expect to happen?

I would expect all properties, especially `kops.k8s.io/instancegroup` under `nodeLabels`, to also be created when using the `kops create cluster` command, the same way `kops create instancegroup` does. The whole `kubelet` property is also missing when creating a cluster, so ideally all "default" properties would be aligned between the `create cluster` and `create instancegroup` commands.

7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
Check Answer 4.
8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know?

I have many existing clusters (k8s v1.27, which have been upgraded many times) with the `kops.k8s.io/instancegroup` node label set, so this may have been working before, or it may have been set as part of a previous kops upgrade.