cilium-operator is missing RBAC permission to remove node.cilium.io/agent-not-ready taint
#15464
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind bug
1. What kops version are you running? The command `kops version` will display this information.

1.26
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

1.25.9
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?

Added the node.cilium.io/agent-not-ready taint to a kops instance group and enabled the debug log switch in the kops config.

5. What happened after the commands executed?
No pods can be scheduled on the nodes anymore, as the Cilium-related taint cannot be removed.
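For reference, the taint described above was applied through the instance group spec. A minimal sketch, with placeholder names, sizes, and subnets (the actual cluster config will differ):

```yaml
# Hypothetical kops InstanceGroup; name, cluster label, machine type,
# sizes, and subnet are placeholders.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
  labels:
    kops.k8s.io/cluster: example.k8s.local
spec:
  role: Node
  machineType: t3.medium
  minSize: 1
  maxSize: 3
  subnets:
    - us-east-1a
  taints:
    # Taint recommended by the Cilium installation guide so that
    # workloads are not scheduled before the CNI agent is ready.
    - node.cilium.io/agent-not-ready=true:NoExecute
```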
6. What did you expect to happen?
cilium-operator pods should be allowed to patch nodes in order to remove the Cilium-related taint.
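A minimal sketch of the kind of ClusterRole rule the operator would need for this (the rule shape follows standard Kubernetes RBAC; the role name is assumed, and the exact verbs granted by the upstream chart may differ):

```yaml
# Sketch of an RBAC rule allowing cilium-operator to patch Node objects,
# e.g. to strip the node.cilium.io/agent-not-ready taint.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium-operator   # name assumed; the kops template may differ
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "patch"]
```

Whether the permission is in effect can be checked with `kubectl auth can-i patch nodes --as=system:serviceaccount:kube-system:cilium-operator`.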
9. Anything else do we need to know?
We added this taint as per the Cilium-recommended installation guide, to avoid pods being scheduled before the CNI is actually working as expected.
It seems that the ClusterRole definition for the cilium-operator ServiceAccount does not match the values in the official Cilium Helm chart.
kops/upup/models/cloudup/resources/addons/networking.cilium.io/k8s-1.16-v1.12.yaml.template
Lines 419 to 579 in 1bef619