Generated RBAC for ClusterRole is missing <resource>/status #1996

@Jeansen

Bug Report

What did you do?

Built the operator with Quarkus Native and JIB, pushed the image to my local registry, and applied the generated manifests (see Additional context).

What did you expect to see?

Complete generated RBAC, i.e. a ClusterRole that also covers the certificatesigningrequests/status subresource.

What did you see instead? Under which circumstances?

The generated ClusterRole only lists certificatesigningrequests and certificatesigningrequests/finalizers; certificatesigningrequests/status is missing, so the deployed operator fails with missing permissions (dev mode works fine).

Environment

Kubernetes cluster type:

Plain Kubernetes, created with kubeadm

java-operator-sdk version (from pom.xml): 6.2.1

java -version: n/a, native image built with JIB

kubectl version: 1.27.4

Possible Solution

Additional context

I use Quarkus Native with JIB to build the operator. Running in dev mode works fine. But as soon as I build a release, which creates an image (pushed to my local registry), and then apply the generated manifests, the operator complains that it is missing some rights. Here is the generated YAML output for my little test project:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-08-05 - 19:04:19 +0000
  labels:
    app.kubernetes.io/managed-by: quarkus
    app.kubernetes.io/version: 1.0-SNAPSHOT
    app.kubernetes.io/name: cert-operator
  name: cert-operator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: server-cert-approve-cluster-role
  namespace: default
rules:
  - apiGroups:
      - certificates.k8s.io
    resources:
      - certificatesigningrequests
      - certificatesigningrequests/finalizers
    verbs:
      - get
      - list
      - watch
      - patch
      - update
      - create
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: josdk-crd-validating-cluster-role
  namespace: default
rules:
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: server-cert-approve-crd-validating-role-binding
  namespace: default
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: josdk-crd-validating-cluster-role
subjects:
  - kind: ServiceAccount
    name: cert-operator
    namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: server-cert-approve-cluster-role-binding
  namespace: default
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: server-cert-approve-cluster-role
subjects:
  - kind: ServiceAccount
    name: cert-operator
    namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-operator-view
  namespace: default
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: view
subjects:
  - kind: ServiceAccount
    name: cert-operator
    namespace: default
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-08-05 - 19:04:19 +0000
  labels:
    app.kubernetes.io/name: cert-operator
    app.kubernetes.io/version: 1.0-SNAPSHOT
    app.kubernetes.io/managed-by: quarkus
  name: cert-operator
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: cert-operator
    app.kubernetes.io/version: 1.0-SNAPSHOT
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/build-timestamp: 2023-08-05 - 19:04:19 +0000
  labels:
    app.kubernetes.io/managed-by: quarkus
    app.kubernetes.io/version: 1.0-SNAPSHOT
    app.kubernetes.io/name: cert-operator
  name: cert-operator
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/version: 1.0-SNAPSHOT
      app.kubernetes.io/name: cert-operator
  template:
    metadata:
      annotations:
        app.quarkus.io/build-timestamp: 2023-08-05 - 19:04:19 +0000
      labels:
        app.kubernetes.io/managed-by: quarkus
        app.kubernetes.io/version: 1.0-SNAPSHOT
        app.kubernetes.io/name: cert-operator
      namespace: default
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: proxy-ng:443/quarkus/cert-operator:1.0-SNAPSHOT
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/live
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          name: cert-operator
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/ready
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          startupProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/started
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
      serviceAccountName: cert-operator

If I add - certificatesigningrequests/status to the resources of the ClusterRole and apply it again, the operator works fine.
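For completeness, this is the rules section I apply manually as a workaround; it is identical to the generated one above except for the added status subresource:

rules:
  - apiGroups:
      - certificates.k8s.io
    resources:
      - certificatesigningrequests
      - certificatesigningrequests/finalizers
      - certificatesigningrequests/status
    verbs:
      - get
      - list
      - watch
      - patch
      - update
      - create
      - delete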

Here's the relevant code:

import io.fabric8.kubernetes.api.model.certificates.v1.CertificateSigningRequest
import io.fabric8.kubernetes.client.KubernetesClient
import io.fabric8.kubernetes.client.KubernetesClientException
import io.javaoperatorsdk.operator.api.reconciler.Context
import io.javaoperatorsdk.operator.api.reconciler.ControllerConfiguration
import io.javaoperatorsdk.operator.api.reconciler.Reconciler
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl
import java.util.logging.Level
import java.util.logging.Logger

@ControllerConfiguration(name = "server-cert-approve")
class ExposedAppReconciler(private val client: KubernetesClient) : Reconciler<CertificateSigningRequest?> {

    override fun reconcile(resource: CertificateSigningRequest?, context: Context<CertificateSigningRequest?>?): UpdateControl<CertificateSigningRequest?> {
        val logger = Logger.getLogger(ExposedAppReconciler::class.toString())

        logger.log(Level.INFO, "Found new pending CSR: ${resource?.metadata?.name}")
        try {
            val r = client.certificates().v1().certificateSigningRequests().resource(resource)
            // Approve the CSR if it does not carry a non-"Approved" condition.
            r.item().status.conditions.find { it.type != "Approved" } ?: run {
                logger.log(Level.INFO, "Approving ${resource?.metadata?.name}")
                r.approve()
            }
        } catch (e: KubernetesClientException) {
            logger.log(Level.WARNING, "Server Timeout")
        }

        // Patching the status is what needs the certificatesigningrequests/status permission.
        return UpdateControl.patchStatus(resource)
    }
}

Maybe I am missing something and everything works as intended, but to my understanding the generated manifests should be complete. Any help or hint is highly appreciated!
