
Provisioner pod created in default namespace #92

Closed
aivanov-citc opened this issue Jul 14, 2023 · 8 comments

@aivanov-citc

Talos clusters use Pod Security Standards by default and do not allow the creation of privileged pods. To allow privileged pods in a namespace, you need to add special labels to the namespace:

pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/enforce-version: latest
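
For reference, a namespace manifest carrying these labels could look like this (a sketch; the namespace name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: csi-driver-lvm   # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: latest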

Currently, the provisioner pod is created in the default namespace:

        provisionerPod := &v1.Pod{
                ObjectMeta: metav1.ObjectMeta{
                        // Namespace is not set here, so the pod ends up in "default"
                        Name: string(va.action) + "-" + va.name,
                },

Since the provisioner pod is privileged, please create it in the csi-driver-lvm namespace so that these labels do not have to be added to the default namespace.

@majst01
Contributor

majst01 commented Jul 14, 2023

It is up to you where the provisioner is deployed; the driver itself has no preference.

@aivanov-citc
Author

We did several checks, deploying test pods to different namespaces, and confirmed that the provisioner pod always runs in the "default" namespace.
How can we control this?

@majst01
Contributor

majst01 commented Jul 16, 2023

We did several checks, deploying test pods to different namespaces, and confirmed that the provisioner pod always runs in the "default" namespace. How can we control this?

I have problems understanding what you are aiming for; maybe you can create a PR which demonstrates the problem.

@aivanov-citc
Author

I'm trying to deploy a test pod:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: csi-lvm-system
spec:
  containers:
  - name: hello-container
    image: busybox
    command: ["sh","-c","sleep 3600"]
    volumeMounts:
    - mountPath: /mnt/store
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: storage-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim
  namespace: csi-lvm-system
spec:
  storageClassName: csi-driver-lvm-linear
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Whichever namespace I deploy this pod to ("default", "test", "csi-lvm-system"), the pod responsible for creating the LV (create-pvc-xxxxxxx) is always created in the "default" namespace. Since create-pvc-xxxxxxx is privileged, it would be logical to create it in the driver's own namespace, csi-lvm-system, and apply the labels only there, not to the default namespace:

$ kubectl get pods -A
NAMESPACE           NAME                                              READY   STATUS              RESTARTS          AGE
csi-lvm-system      busybox                                           0/1     Pending             0                 2s
default             create-pvc-dd2780e5-8b79-4620-b9c9-c5420a76abf0   0/1     ContainerCreating   0                 1s

@Gerrit91
Contributor

Maybe we can just create a pull request for a flag (--namespace), which passes the namespace on to the provisioner pod metadata.

We can use environment field refs to inject the namespace in our manifests and helm-charts, like this:

        - name: CSI_DRIVER_LVM_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
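
Combined with the proposed flag, the wiring in the plugin container could then look roughly like this (a sketch; the arg expansion relies on Kubernetes' $(VAR) substitution for env vars defined on the same container):

        args:
          - "--namespace=$(CSI_DRIVER_LVM_NAMESPACE)"   # sketch: expands to the value injected below
        env:
          - name: CSI_DRIVER_LVM_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace   # the namespace the plugin pod itself runs in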

This resolves the problem, right?

@aivanov-citc
Author

Yes, I think it does. Thank you.

@Gerrit91 Gerrit91 self-assigned this Jul 20, 2023
@Gerrit91
Contributor

Hey @aivanov-citc,

I just looked at the problem and found a few things.

There is already a --namespace flag in the plugin. The flag is used to deploy the provisioner pod into the given namespace: https://github.com/metal-stack/csi-driver-lvm/blob/v0.5.2/pkg/lvm/lvm.go#L395.

The flag is set through the helm-chart automatically: https://github.com/metal-stack/helm-charts/blob/v0.3.32/charts/csi-driver-lvm/templates/plugin.yaml#L176. Did you deploy this project through our helm repo? Otherwise, maybe you missed setting the existing --namespace flag for the lvm plugin?
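
If you deployed without the chart, setting the flag directly on the plugin container should have the same effect (a sketch; the namespace value is illustrative):

        args:
          - "--namespace=csi-lvm-system"   # namespace in which the provisioner pods should be created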

In #93, I created a branch that activates Pod Security on the Kind cluster. For the integration tests, I deployed the driver to a dedicated csi-driver-lvm namespace. During the integration tests, you can see that the provisioner pods are correctly deployed to the plugin's namespace and not to the default namespace:

❯ k get po -A 
NAMESPACE            NAME                                                   READY   STATUS              RESTARTS   AGE
csi-driver-lvm       create-pvc-7a7013ea-1b39-464d-baf7-50dad87a356b        0/1     ContainerCreating   0          1s
csi-driver-lvm       csi-driver-lvm-controller-0                            3/3     Running             0          9s
csi-driver-lvm       csi-driver-lvm-plugin-b4265                            3/3     Running             0          9s
default              volume-test                                            0/1     Pending             0          1s
default              volume-test-inline-xfs                                 0/1     Terminating         0          49m

@aivanov-citc
Author

Hey @Gerrit91,
You're right, I'm sorry, I did not see it. I'm closing the issue.
