
Clarify naming patterns for Kyverno ClusterRoles/ClusterRoleBindings #2904

Closed
andriktr opened this issue Jan 5, 2022 · 10 comments · Fixed by #3029 or #3032
Assignees
Labels
Documentation (Update Documentation), helm (Issues dealing with the Helm chart)

Comments

@andriktr

andriktr commented Jan 5, 2022

Hello,
I'm trying to upgrade kyverno 1.5.0 to kyverno 1.5.2.
My kyverno pods are not running after upgrade:

E0105 09:21:04.250315       1 common.go:64] Register "msg"="failed to get cluster role" "error"="failed to get cluster role with suffix webhook"
I0105 09:21:04.848559       1 registration.go:641] Register "msg"="Endpoint ready"  "name"="kyverno-svc" "ns"="kyverno"
E0105 09:21:05.240544       1 registration.go:291] Register "msg"="failed to create resource mutating webhook configuration" "error"="MutatingWebhookConfiguration.admissionregistration.k8s.io \"kyverno-resource-mutating-webhook-cfg\" is invalid: [metadata.ownerReferences.apiVersion: Invalid value: \"\": version must not be empty, metadata.ownerReferences.kind: Invalid value: \"\": kind must not be empty, metadata.ownerReferences.name: Invalid value: \"\": name must not be empty, metadata.ownerReferences.uid: Invalid value: \"\": uid must not be empty]" "kind"="MutatingWebhookConfiguration" "name"="kyverno-resource-mutating-webhook-cfg"

It seems that Kyverno is looking for a cluster role with the suffix "webhook". I'm creating the ClusterRoles and ClusterRoleBindings for the Kyverno service account outside of the Kyverno Helm chart, and setting the rbac config in the chart to:

rbac:
  create: false
  serviceAccount:
    create: false
    name: kyverno
    annotations: {}
    #   example.com/annotation: value

The ClusterRoles I create for Kyverno separately are named a bit differently.

A very similar problem is described here. I have tried adding the app.kubernetes.io/ownerreference: "true" label to my ClusterRoles for Kyverno, but it doesn't help.

@vyankyGH I see that you worked on this, any thoughts?

Thanks in advance.

@andriktr andriktr added the bug Something isn't working label Jan 5, 2022
@vyankyGH
Contributor

vyankyGH commented Jan 5, 2022

@andriktr can you please check whether your cluster role has the label app.kubernetes.io/name: "kyverno"?
We have already removed the lookup based on the label app.kubernetes.io/ownerreference: "true", as it was becoming very specific.
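For reference, a minimal metadata fragment carrying that label might look like the following (a sketch based on this thread, not on chart documentation; the role name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-kyverno-clusterrole        # illustrative name
  labels:
    # the label the controller reportedly matches on:
    app.kubernetes.io/name: "kyverno"
```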

@andriktr
Author

andriktr commented Jan 5, 2022

@vyankyGH I have added app.kubernetes.io/name: "kyverno", but it's still not working and the same errors occur:

E0105 11:07:11.177594       1 common.go:64] Register "msg"="failed to get cluster role" "error"="failed to get cluster role with suffix webhook"
I0105 11:07:11.571345       1 registration.go:641] Register "msg"="Endpoint ready"  "name"="kyverno-svc" "ns"="kyverno"
E0105 11:07:11.965318       1 registration.go:291] Register "msg"="failed to create resource mutating webhook configuration" "error"="MutatingWebhookConfiguration.admissionregistration.k8s.io \"kyverno-resource-mutating-webhook-cfg\" is invalid: [metadata.ownerReferences.apiVersion: Invalid value: \"\": version must not be empty, metadata.ownerReferences.kind: Invalid value: \"\": kind must not be empty, metadata.ownerReferences.name: Invalid value: \"\": name must not be empty, metadata.ownerReferences.uid: Invalid value: \"\": uid must not be empty]" "kind"="MutatingWebhookConfiguration" "name"="kyverno-resource-mutating-webhook-cfg"
E0105 11:07:11.965372       1 main.go:403] setup "msg"="Timeout registering admission control webhooks" "error"=null

Here is what my cluster role looks like:

# Cluster role for Kyverno service account required for standard kyverno operations (should be updated accordingly if kyverno releases update for cluster-role.yaml)
- name: if-baltic-kyverno-standard
  enabled: true
  labels:
    app.kubernetes.io/ownerreference: "true"
    app.kubernetes.io/name: "kyverno"
    app: kyverno
  rules: 
  - apiGroups:
    - coordination.k8s.io
    resources:
    - leases
    verbs:
    - create
    - delete
    - get
    - patch
    - update
  - apiGroups:
    - '*'
    resources:
    - events
    - mutatingwebhookconfigurations
    - validatingwebhookconfigurations
    - certificatesigningrequests
    - certificatesigningrequests/approval
    verbs:
    - create
    - delete
    - get 
    - list
    - patch
    - update
    - watch
  - apiGroups:
    - certificates.k8s.io
    resources:
    - certificatesigningrequests
    - certificatesigningrequests/approval
    - certificatesigningrequests/status
    resourceNames:
      - kubernetes.io/legacy-unknown
    verbs:
    - create
    - delete
    - get 
    - update
    - watch
  - apiGroups:
    - certificates.k8s.io
    resources:
    - signers
    resourceNames:
    - kubernetes.io/legacy-unknown
    verbs:
    - approve 
  - apiGroups:
    - "*"
    resources:
    - roles
    - clusterroles
    - rolebindings
    - clusterrolebindings
    - configmaps
    - namespaces
    verbs:
    - watch
    - list
  - apiGroups:
    - '*'
    resources:
    - policies
    - policies/status
    - clusterpolicies
    - clusterpolicies/status
    - policyreports
    - policyreports/status
    - clusterpolicyreports
    - clusterpolicyreports/status
    - generaterequests
    - generaterequests/status
    - reportchangerequests
    - reportchangerequests/status
    - clusterreportchangerequests
    - clusterreportchangerequests/status
    verbs:
    - create
    - delete
    - get
    - list
    - patch
    - update
    - watch
  - apiGroups:
    - 'apiextensions.k8s.io'
    resources:
    - customresourcedefinitions
    verbs:
    - delete
  - apiGroups:
    - '*'
    resources:
    - '*'
    verbs:
    - get
    - list
    - update
    - watch
  - apiGroups:
    - "*"
    resources:
    - namespaces
    - networkpolicies
    - secrets
    - configmaps
    - resourcequotas
    - limitranges
    verbs:
    - create
    - update
    - delete
    - list
    - get
  - apiGroups:
    - '*'
    resources:
    - namespaces
    verbs:
    - watch
  - apiGroups:
    - kyverno.io
    resources:
    - policies
    - clusterpolicies
    verbs:
    - "*"
  - apiGroups:
    - wgpolicyk8s.io/v1alpha1
    resources:
    - policyreport
    - clusterpolicyreport
    verbs:
    - '*'
  - apiGroups:
    - kyverno.io
    resources:
    - reportchangerequests
    - clusterreportchangerequests
    verbs:
    - "*"

@vyankyGH
Contributor

vyankyGH commented Jan 5, 2022

@andriktr it looks like Kyverno is not able to find a ClusterRole with the suffix webhook.
The ClusterRole name should be if-baltic-kyverno-standard:webhook.
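To illustrate the expected pattern (a sketch assuming the base name from this thread; the suffix convention is inferred from the error message above, not from official documentation):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # base name + ":webhook" suffix, which the controller looks up
  name: if-baltic-kyverno-standard:webhook
  labels:
    app.kubernetes.io/name: "kyverno"
# rules for webhook management (mutating/validating webhook
# configurations, etc.) would go here
```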

@andriktr
Author

andriktr commented Jan 5, 2022

@vyankyGH are there any specific reasons to tie Kyverno to a ClusterRole with such a suffix?
This doesn't look very flexible/dynamic IMO.

@vyankyGH vyankyGH self-assigned this Jan 5, 2022
@vyankyGH vyankyGH added this to the Kyverno Release 1.6.0 milestone Jan 5, 2022
@andriktr
Author

andriktr commented Jan 6, 2022

@vyankyGH FYI. Once I changed the name of the ClusterRole to contain the :webhook suffix, the upgrade went through successfully and Kyverno works as expected. However, I still think that a hardcoded suffix is not the best approach.

@realshuting
Member

Hi @andriktr - the :webhook suffix indicates the permissions needed by the Kyverno webhook server; it helps group the roles for this controller and is easier to maintain.

Is there any particular reason you need a custom role name?

@andriktr
Author

andriktr commented Jan 6, 2022

Hi @realshuting. Personally, it doesn't matter to me which name is set for the Kyverno ClusterRole. However, the Kyverno Helm chart allows skipping the creation of cluster roles, which means the cluster roles must then be created outside the Kyverno deployment. If the role name or its suffix is hardcoded, it should at least be documented somewhere which naming pattern to use when creating a role outside of the Kyverno Helm chart.

@realshuting
Member

Good point! Yes, we should document the naming convention for ClusterRole/ClusterRoleBinding names.

However, I don't think we want to make rbac.create configurable, as Kyverno wouldn't function if these RBAC resources were missing.

rbac:
  create: true

@andriktr
Author

andriktr commented Jan 6, 2022

But it is already configurable: if you set rbac.create: false, the ClusterRoles and bindings will not be created by the Kyverno Helm chart.
In our case we create the Kyverno SA and its RBAC in a separate, common Helm chart where we manage permissions for the whole cluster. It adds some management overhead, but this approach allows us to minimize permissions for CI/CD service accounts and to centrally track/review/approve all cluster-wide permissions.
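Under that setup, the externally managed binding would look roughly like this (an illustrative sketch using the names from this thread; adjust names and namespace for your cluster):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: if-baltic-kyverno-standard:webhook   # mirrors the ClusterRole name
  labels:
    app.kubernetes.io/name: "kyverno"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: if-baltic-kyverno-standard:webhook
subjects:
- kind: ServiceAccount
  name: kyverno        # matches serviceAccount.name in the chart values
  namespace: kyverno
```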

@realshuting
Member

realshuting commented Jan 6, 2022

It adds some management overhead, but this approach allows us to minimize permissions for CI/CD service accounts and to centrally track/review/approve all cluster-wide permissions.

That makes sense!

@vyankyGH - let's clarify naming patterns in the Helm charts and update https://github.com/kyverno/kyverno/blob/main/charts/kyverno/README.md.

@realshuting realshuting changed the title [BUG] Kyverno upgrade fails due to clusterrole names Clarify naming patterns for Kyverno ClusterRoles/ClusterRoleBindings Jan 6, 2022
@realshuting realshuting added Documentation Update Documentation and removed bug Something isn't working labels Jan 6, 2022
@chipzoller chipzoller added the helm Issues dealing with the Helm chart label Jan 17, 2022