This repository has been archived by the owner on Jun 20, 2023. It is now read-only.

[1.10.0] update RBAC for cilium operator #39

Closed
errordeveloper opened this issue Apr 30, 2021 · 2 comments · Fixed by #40

Comments


errordeveloper commented Apr 30, 2021

The cilium-olm pod is running, but the operator keeps logging errors:

2021-04-30T14:25:03.591Z	ERROR	helm.controller	Release failed	{"namespace": "cilium", "name": "cilium", "apiVersion": "cilium.io/v1alpha1", "kind": "CiliumConfig", "release": "cilium", "error": "failed to install release: clusterroles.rbac.authorization.k8s.io \"cilium-operator\" is forbidden: user \"system:serviceaccount:cilium:cilium-olm\" (groups=[\"system:serviceaccounts\" \"system:serviceaccounts:cilium\" \"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"\"], Resources:[\"services/status\"], Verbs:[\"update\"]}"}

As a result, the operator is unable to install the Cilium chart.

@errordeveloper

The ClusterRole for the OLM operator needs to be updated to accommodate this: Kubernetes only lets a subject grant RBAC permissions it already holds, so the cilium-olm service account must itself hold every permission that the rendered chart grants.
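A minimal sketch of the rules that would need to be added to the ClusterRole bound to the cilium-olm service account, derived from the forbidden-permission error above (the exact file and the full fix are in #40; this fragment is illustrative only):

```yaml
# Sketch: extra rules for the ClusterRole used by the cilium-olm service
# account. Because of RBAC escalation prevention, the operator's own role
# must cover everything the Cilium 1.10 chart grants, including the new
# services/status update verb used for BGP LB IP allocation.
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services/status
  verbs:
  - update
```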

@errordeveloper

$ diff -u operator/cilium.v1.9.6/cilium/templates/cilium-operator-clusterrole.yaml operator/cilium.v1.10.0-rc1/cilium/templates/cilium-operator-clusterrole.yaml 
--- operator/cilium.v1.9.6/cilium/templates/cilium-operator-clusterrole.yaml	2021-04-26 19:23:29.000000000 +0100
+++ operator/cilium.v1.10.0-rc1/cilium/templates/cilium-operator-clusterrole.yaml	2021-04-30 13:47:01.000000000 +0100
@@ -1,4 +1,24 @@
 {{- if .Values.operator.enabled }}
+
+{{- /* Workaround so that we can set the minimal k8s version that we support */ -}}
+{{- $k8sVersion := .Capabilities.KubeVersion.Version -}}
+{{- $k8sMajor := .Capabilities.KubeVersion.Major -}}
+{{- $k8sMinor := .Capabilities.KubeVersion.Minor -}}
+
+{{- if .Values.Capabilities -}}
+{{- if .Values.Capabilities.KubeVersion -}}
+{{- if .Values.Capabilities.KubeVersion.Version -}}
+{{- $k8sVersion = .Values.Capabilities.KubeVersion.Version -}}
+{{- if .Values.Capabilities.KubeVersion.Major -}}
+{{- $k8sMajor = toString (.Values.Capabilities.KubeVersion.Major) -}}
+{{- if .Values.Capabilities.KubeVersion.Minor -}}
+{{- $k8sMinor = toString (.Values.Capabilities.KubeVersion.Minor) -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
@@ -26,6 +46,21 @@
 - apiGroups:
   - ""
   resources:
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - ""
+  resources:
+  # to perform LB IP allocation for BGP
+  - services/status
+  verbs:
+  - update
+- apiGroups:
+  - ""
+  resources:
   # to perform the translation of a CNP that contains `ToGroup` to its endpoints
   - services
   - endpoints
@@ -71,13 +106,9 @@
 # For cilium-operator running in HA mode.
 #
 # Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
-# between mulitple running instances.
+# between multiple running instances.
 # The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
 # common and fewer objects in the cluster watch "all Leases".
-# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
-# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition make sure
-# that we only authorize access to leases resources in supported K8s versions.
-{{- if or (ge .Capabilities.KubeVersion.Minor "14") (gt .Capabilities.KubeVersion.Major "1") }}
 - apiGroups:
   - coordination.k8s.io
   resources:
@@ -87,4 +118,3 @@
   - get
   - update
 {{- end }}
-{{- end }}
$ 
