[cinder-csi-plugin] Cinder CSI Plugin doesn't honor anymore securityContext/fsGroup #1362

Closed
eumel8 opened this issue Jan 9, 2021 · 14 comments · Fixed by #1366

@eumel8
Contributor

eumel8 commented Jan 9, 2021

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

In a Kubernetes pod spec it should be possible to set the group ownership of mounted volumes via securityContext:

securityContext:
  fsGroup: 101

Driver: cinder.csi.openstack.org
Driver version: 1.2.1

Inside the pod (kubectl exec):

bash-5.0$ id
uid=101(nginx) gid=101(nginx) groups=101(nginx)
bash-5.0$ mount | grep nginx
/dev/vde on /usr/share/nginx/html type ext4 (rw,relatime,data=ordered)
bash-5.0$ ls -l /usr/share/nginx/
total 4
drwxr-xr-x    3 root     root          4096 Jan  9 01:33 html

What you expected to happen:

Driver: cinder.csi.openstack.org version: 1.2.0

Inside the pod (kubectl exec):

bash-5.0$ id
uid=101(nginx) gid=101(nginx) groups=101(nginx)
bash-5.0$ mount | grep nginx
/dev/vde on /usr/share/nginx/html type ext4 (rw,relatime,data=ordered)
bash-5.0$ ls -l /usr/share/nginx/
total 4
drwxrwsr-x    3 root     nginx         4096 Jan  9 01:20 html

How to reproduce it:

Deploy a pod with a PVC attached and securityContext/fsGroup set:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp-deployment
spec:
  selector:
    matchLabels:
      app: demoapp
  replicas: 1
  template:
    metadata:
      labels:
        app: demoapp
    spec:
      containers:
      - name: demoapp
        image: mtr.external.otc.telekomcloud.com/mcsps/nginx-non-root:master
        ports:
        - containerPort: 8080
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          runAsNonRoot: true
          runAsUser: 101
          runAsGroup: 101
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: demoapp-volume
      nodeSelector:
        role: worker
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - name: demoapp-volume
          persistentVolumeClaim:
            claimName: demoapp-volume
      securityContext:
        fsGroup: 101
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demoapp-volume
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 1Gi
  storageClassName: sata
  volumeMode: Filesystem

Anything else we need to know?:

The last working version is release-1.18:
https://github.com/kubernetes/cloud-provider-openstack/blob/release-1.18/pkg/volume/cinder/cinder.go#L403

Or the internal K8s cinder driver:
https://github.com/kubernetes/kubernetes/blob/6d5cb36d36f34cb4f5735b6adcd5ea8ebb4440ba/pkg/volume/cinder/cinder.go#L451

But the code has been completely reworked since 1.19. Is there now another option that I missed?

Environment:

  • openstack-cloud-controller-manager (or other related binary) version: 1.19
  • cinder-csi-plugin: 1.2.1
@k8s-ci-robot added the kind/bug label on Jan 9, 2021
@jichenjc
Contributor

/assign

I can give it a try.

@jichenjc
Contributor

jichenjc commented Jan 11, 2021

I can't reproduce this error; can you provide additional info on the versions you are using?
I think my output is the same as your expected result:

bash-5.0$ id
uid=101(nginx) gid=101(nginx) groups=101(nginx)
bash-5.0$ mount | grep nginx
/dev/vde on /usr/share/nginx/html type ext4 (rw,relatime,data=ordered)
bash-5.0$ ls -l /usr/share/nginx/
total 4
drwxrwsr-x    3 root     nginx         4096 Jan 11 03:30 html
bash-5.0$

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp-deployment
spec:
  selector:
    matchLabels:
      app: demoapp
  replicas: 1
  template:
    metadata:
      labels:
        app: demoapp
    spec:
      containers:
      - name: demoapp
        image: mtr.external.otc.telekomcloud.com/mcsps/nginx-non-root:master
        ports:
        - containerPort: 8080
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          runAsNonRoot: true
          runAsUser: 101
          runAsGroup: 101
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: demoapp-volume
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - name: demoapp-volume
          persistentVolumeClaim:
            claimName: demoapp-volume
      securityContext:
        fsGroup: 101
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demoapp-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin

The image I am using is v1.20.0:

root@jjtest1:~/go/src/github.com/cloud-provider-openstack/examples/cinder-csi-plugin# kubectl describe pod csi-cinder-controllerplugin-0 -n kube-system | grep Image:
    Image:         k8s.gcr.io/sig-storage/csi-attacher:v2.2.1
    Image:         k8s.gcr.io/sig-storage/csi-provisioner:v1.6.1
    Image:         k8s.gcr.io/sig-storage/csi-snapshotter:v2.1.3
    Image:         k8s.gcr.io/sig-storage/csi-resizer:v0.5.1
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.1.0
    Image:         docker.io/k8scloudprovider/cinder-csi-plugin:v1.20.0
root@jjtest1:~/go/src/github.com/cloud-provider-openstack/examples/cinder-csi-plugin# kubectl describe pod csi-cinder-nodeplugin-b2kc7 -n kube-system | grep Image:
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.1.0
    Image:         docker.io/k8scloudprovider/cinder-csi-plugin:v1.20.0

@ramineni
Contributor

ramineni commented Jan 11, 2021

The last working version is release-1.18:
https://github.com/kubernetes/cloud-provider-openstack/blob/release-1.18/pkg/volume/cinder/cinder.go#L403

Or the internal K8s cinder driver:
https://github.com/kubernetes/kubernetes/blob/6d5cb36d36f34cb4f5735b6adcd5ea8ebb4440ba/pkg/volume/cinder/cinder.go#L451

But the code has been completely reworked since 1.19. Is there now another option that I missed?

FYI, this code is not part of the CSI plugin; it belongs to the standalone Cinder provisioner, which is deprecated and removed in later releases.

@eumel8
Contributor Author

eumel8 commented Jan 11, 2021

The last working version is release-1.18:
https://github.com/kubernetes/cloud-provider-openstack/blob/release-1.18/pkg/volume/cinder/cinder.go#L403

Or the internal K8s cinder driver:
https://github.com/kubernetes/kubernetes/blob/6d5cb36d36f34cb4f5735b6adcd5ea8ebb4440ba/pkg/volume/cinder/cinder.go#L451

But the code has been completely reworked since 1.19. Is there now another option that I missed?

FYI, this code is not part of the CSI plugin; it belongs to the standalone Cinder provisioner, which is deprecated and removed in later releases.

Yes, it's only a reference for how it was managed in the past.

@eumel8
Contributor Author

eumel8 commented Jan 11, 2021

@jichenjc the images I'm using:

$ kubectl -n kube-system describe pod csi-cinder-controllerplugin-0 | grep Image:
    Image:         mtr.external.otc.telekomcloud.com/mcsps/csi-attacher:v3.0.0
    Image:         mtr.external.otc.telekomcloud.com/mcsps/csi-provisioner:v2.0.2
    Image:         mtr.external.otc.telekomcloud.com/mcsps/csi-snapshotter:v3.0.0
    Image:         mtr.external.otc.telekomcloud.com/mcsps/csi-resizer:v1.0.0
    Image:         mtr.external.otc.telekomcloud.com/mcsps/cinder-csi-plugin-amd64:v1.19.2

$ kubectl -n kube-system describe pod csi-cinder-nodeplugin-fl795 | grep Image:
    Image:         mtr.external.otc.telekomcloud.com/mcsps/csi-node-driver-registrar:v2.0.1
    Image:         mtr.external.otc.telekomcloud.com/mcsps/cinder-csi-plugin-amd64:v1.19.2

For the sidecars I use different versions, per the K8s support matrix: https://kubernetes-csi.github.io/docs/sidecar-containers.html

Note: the images are synced 1:1 to another registry.

@spielkind

One of the differences between the two setups is csi-provisioner 1.6.1 vs. 2.0.2:
https://github.com/kubernetes-csi/external-provisioner/blob/master/CHANGELOG/CHANGELOG-2.0.md
There is one relevant change:

The fstype on provisioned PVs no longer defaults to "ext4". A defaultFStype arg is added to the provisioner. Admins can also specify this fstype via storage class parameter. If fstype is set in storage class parameter, it will be used. The sidecar arg is only checked if fstype is not set in the SC param. (#400, @humblec)
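
For illustration, here is a minimal sketch (not from this thread) of how that argument could be wired into the csi-cinder-controllerplugin sidecar; the flag spelling --default-fstype and the socket path are assumptions based on the standard v2.x provisioner manifests:

# Hypothetical fragment of the csi-cinder-controllerplugin StatefulSet containers:
# give the csi-provisioner 2.x sidecar a default fstype so that StorageClasses
# without an explicit fsType still get ext4 (and thus fsGroup handling).
- name: csi-provisioner
  image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2
  args:
    - "--csi-address=$(ADDRESS)"
    - "--default-fstype=ext4"   # assumed flag name, per the changelog entry above
  env:
    - name: ADDRESS
      value: /var/lib/csi/sockets/pluginproxy/csi.sock
  volumeMounts:
    - name: socket-dir
      mountPath: /var/lib/csi/sockets/pluginproxy/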

Then I found this: https://kubernetes-csi.github.io/docs/support-fsgroup.html
As we didn't enable this feature gate in our 1.19 cluster, I guess it's somehow the default behavior:

If undefined, fsGroupPolicy will default to ReadWriteOnceWithFSType, keeping the previous behavior.

ReadWriteOnceWithFSType: Indicates that volumes will be examined to determine if volume ownership and permissions should be modified to match the pod's security policy. Changes will only occur if the fsType is defined and the persistent volume's accessModes contains ReadWriteOnce.

I checked both:

  1. RWO is configured
  2. fsType is not explicitly set in our storage classes

I then just copied one of our storage classes and defined "fsType: ext4" - when using this storage class, I had no permission issues.

I'm not sure if this is a bug or works as designed, and maybe it just needs better documentation?
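
For reference, a minimal sketch (not part of this thread) of what the fsGroupPolicy setting mentioned above would look like on the CSIDriver object; in 1.19 the field is alpha and assumed to require the CSIVolumeFSGroupPolicy feature gate:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: cinder.csi.openstack.org
spec:
  attachRequired: true
  podInfoOnMount: true
  # Assumption: with the CSIVolumeFSGroupPolicy feature gate enabled,
  # "File" tells kubelet to apply fsGroup regardless of fsType/accessModes.
  fsGroupPolicy: File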

@eumel8
Contributor Author

eumel8 commented Jan 11, 2021

Interesting. @jichenjc can you share your storage class definition? Maybe it's only a documentation issue. I have no special settings so far.

@jichenjc
Contributor

jichenjc commented Jan 12, 2021

@eumel8 nothing special, it's just defined from here:
https://github.com/kubernetes/cloud-provider-openstack/blob/master/examples/cinder-csi-plugin/nginx.yaml#L4
and it doesn't have the fsType you mentioned ...

OK, our manifests come from
https://github.com/kubernetes/cloud-provider-openstack/blob/master/manifests
and I guess that's the difference.

I am working on bumping the sidecar versions now.

@jichenjc
Contributor

jichenjc commented Jan 12, 2021

I switched csi-provisioner to 2.0.2 and now I get the same result as you; updating the storage class as you mentioned fixes the problem:

bash-5.0$ id
uid=101(nginx) gid=101(nginx) groups=101(nginx)
bash-5.0$  mount | grep nginx
/dev/vde on /usr/share/nginx/html type ext4 (rw,relatime,data=ordered)
bash-5.0$ ls -l /usr/share/nginx/
total 4
drwxr-xr-x    3 root     root          4096 Jan 12 03:47 html

So I will submit a PR to bump the provisioner version.

@ramineni
Contributor

ramineni commented Jan 13, 2021

Changes will only occur if the fsType is defined and the persistent volume's accessModes contains ReadWriteOnce.

@spielkind Thanks for sharing the inputs.

I then just copied one of our storage classes and defined "fsType: ext4" - when using this storage class, I had no permission issues.

Just to clarify again: if fsType is explicitly defined, then the issue is resolved; is that what you meant?

@ramineni
Contributor

@eumel8 We haven't updated the sidecars to 2.0.2 yet; we always suggest using the versions specified in the manifests of the respective release, e.g. for 1.19 use the manifests in the release-1.19 branch of cloud-provider-openstack (cpo).

@jichenjc
Contributor

Just to clarify again: if fsType is explicitly defined, then the issue is resolved; is that what you meant?

Check my comments above: with 2.0.2, if there is no fsType as below, I get the same error as reported by @eumel8; if I add it, the issue is gone:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin
provisioner: cinder.csi.openstack.org
parameters:
  fsType: ext4

@ramineni
Contributor

@jichenjc OK, got it. Then it's either a doc update, or we can check if there is a way to specify a default fstype when none is specified.

@jichenjc
Contributor

PR #1366 failed, and I think it's related to in-tree code, so I opened kubernetes/kubernetes#98053 to see whether that's the right way.
