
SELinux prevents local-path provisioner PV dirs from being cleaned up #10130

Closed

zc-devs opened this issue May 21, 2024 · 5 comments

@zc-devs
Contributor

zc-devs commented May 21, 2024

Environmental Info:
K3s Version:
v1.29.4+k3s1 (94e29e2e)

Node(s) CPU architecture, OS, and Version:
Linux l7-1 5.15.0-206.153.7.el9uek.x86_64 #2 SMP Thu May 9 15:59:05 PDT 2024 x86_64 x86_64 x86_64 GNU/Linux
k3s-selinux.noarch 1.5-1.el9

Cluster Configuration:
1 node / server

Describe the bug:
After applying the changes from #9964, the PV is created, but it cannot be deleted because the helper pod fails. As a result, the PV gets stuck in the Released state.

Steps To Reproduce:

  1. Install K3s (which ships Local path provisioner v0.26) with SELinux enabled:
# cat /etc/rancher/k3s/config.yaml
selinux: true
disable:
  - servicelb
  - local-storage
disable-cloud-controller: true
disable-kube-proxy: true
disable-network-policy: true
flannel-backend: none
...
INSTALL_K3S_VERSION='v1.29.4+k3s1' ./k3s.sh
  2. Deploy Local path provisioner with the changes from "Update local-path-provisioner helper script" #9964.
  3. Disable dontaudit rules:
semodule -DB
  4. Create a test pvc.yaml (a sketch of such manifests is shown after the audit output below):
kubectl apply -f pvc.yaml
  5. Create a test pod.yaml:
kubectl apply -f pod.yaml
  6. Delete the Pod:
kubectl delete -f pod.yaml
  7. Delete the PVC:
kubectl delete -f pvc.yaml
  8. Check the PV: it still exists, in the Released state.
  9. Check helper-pod-delete-pvc-*: it is in a failed state.
  10. Check the SELinux audit log and see:
# ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent
----
time->Tue May 21 20:15:35 2024
type=PROCTITLE msg=audit(1716315335.918:5498): proctitle=726D002D7266002F7661722F6C69622F72616E636865722F6B33732F73746F726167652F7076632D31343335363431662D666236652D346136392D393238352D6639353639376533346636395F6B7562652D73797374656D5F746573742D707663
type=SYSCALL msg=audit(1716315335.918:5498): arch=c000003e syscall=257 success=no exit=-13 a0=ffffff9c a1=7ffe336612a2 a2=90800 a3=0 items=0 ppid=1041561 pid=1041573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rm" exe="/bin/rm" subj=system_u:system_r:container_t:s0:c637,c664 key=(null)
type=AVC msg=audit(1716315335.918:5498): avc:  denied  { read } for  pid=1041573 comm="rm" name="pvc-1435641f-fb6e-4a69-9285-f95697e34f69_kube-system_test-pvc" dev="dm-3" ino=3277505 scontext=system_u:system_r:container_t:s0:c637,c664 tcontext=system_u:object_r:container_file_t:s0:c583,c757 tclass=dir permissive=0
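A minimal sketch of the pvc.yaml / pod.yaml referenced in the steps above, combined into a single apply for brevity; the image and command are illustrative placeholders, and only the names, namespace, size, access mode, and storage class match what is shown elsewhere in this report:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: kube-system
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: kube-system
spec:
  containers:
    - name: test
      image: busybox              # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF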

Expected behavior:

  1. The Persistent Volume is deleted successfully.
  2. There are no failed helper pods.
  3. There are no related records in the SELinux audit log.

Actual behavior:

  1. The Persistent Volume is not deleted and stays in the Released state.
  2. The helper-pod-delete-pvc-* pod is in a failed state.
  3. There is an access-denied record in the SELinux audit log.

Additional context / logs:
#9833

  1. Local path provisioner v0.24 works.
  2. Disabling SELinux (setenforce 0) works around the issue with v0.26.
@brandond changed the title from "SELinux prevents PVs from deletion" to "SELinux prevents local-path provisioner PV dirs from being cleaned up" on May 21, 2024
@brandond
Contributor

cc @galal-hussein

@galal-hussein
Contributor

The log doesn't make a lot of sense in my opinion. The denial is a read access from the source context (container_t) to (container_file_t), which should be allowed if container-selinux is installed. Can you check the following:

rpm -qa | grep selinux

also

semodule -l

I need to see which version of container-selinux is used, and whether k3s-selinux is actually loaded, not just installed.

@zc-devs
Contributor Author

zc-devs commented May 22, 2024

can you check

Sure:
rpm.txt
semodule.txt

@galal-hussein
Contributor

Okay, I think I have figured out the problem:

The problem is related to MCS labels and multi-container access to the same files:

 ps auxZ | grep local-path
system_u:system_r:container_t:s0:c608,c1006 root 10449 0.0  0.9 1265432 36588 ?  Ssl  18:05   0:00 local-path-provisioner start --config /etc/config/config.json

As you can see, local-path-provisioner started with the MCS label s0:c608,c1006, while the helper pod that tries to delete the files started with the MCS label s0:c212,c497, which does not have access to that file category:

type=AVC msg=audit(1716402374.003:883): avc:  denied  { read } for  pid=14858 comm="ls" name="pvc-31023300-a21e-45ea-b177-70d8f26389b6_kube-system_test-pvc" dev="xvda4" ino=184552397 scontext=system_u:system_r:container_t:s0:c212,c497 tcontext=system_u:object_r:container_file_t:s0:c310,c608 tclass=dir permissive=0

For more info: https://www.redhat.com/en/blog/how-selinux-separates-containers-using-multi-level-security
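The mismatch can also be confirmed directly on the node by comparing the provisioner's process categories with the pair stamped on the PV directory; the path below assumes the default K3s local-path storage location:

ps auxZ | grep local-path
ls -dZ /var/lib/rancher/k3s/storage/pvc-*
# deletion only succeeds when the helper pod's MCS categories dominate
# the category pair shown on the pvc-* directory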

The fix is simply to give the helper pod a wider range of MCS categories in its security context. I have added a PR to local-path-provisioner to make that a permanent fix.
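For anyone who needs a stopgap before that lands, something along these lines should work; the ConfigMap name, the helperPod.yaml key, and the exact level value are assumptions based on the default local-path-provisioner layout, not the final contents of the PR:

kubectl -n kube-system edit configmap local-path-config
# under the helperPod.yaml key, give the helper container a securityContext
# that spans the full MCS range, for example:
#   securityContext:
#     seLinuxOptions:
#       level: "s0:c0.c1023"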

@VestigeJ

VestigeJ commented Jun 6, 2024

Reproduced with SELinux in permissive (audit) mode

Validated with COMMIT=d9b8ba8d7109ca098c379d170eb412879c5ee94e

type=AVC msg=audit(1717714359.907:540): avc:  denied  { remove_name } for  pid=7527 comm="rm" name="pvc-34f0f671-74e7-4637-bc8d-621c631cda7d_kube-system_test-pvc" dev="xvda3" ino=92284451 scontext=system_u:system_r:container_t:s0:c62,c578 tcontext=system_u:object_r:var_lib_t:s0 tclass=dir permissive=1

//showing expanded file categories
$ sudo ps auxZ | grep local-path

system_u:system_r:container_t:s0:c483,c762 root 3740 0.0  0.8 1265184 33320 ?  Ssl  23:34   0:00 local-path-provisioner start --config /etc/config/config.json

$ kg pv,pvc,pod -A

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-956284c7-c03a-4955-a634-a7bb4b0960d3   10Mi       RWO            Delete           Bound    kube-system/test-pvc   local-path     <unset>                          4m28s

NAMESPACE     NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
kube-system   persistentvolumeclaim/test-pvc   Bound    pvc-956284c7-c03a-4955-a634-a7bb4b0960d3   10Mi       RWO            local-path     <unset>                 4m31s

NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/coredns-576bfc4dc7-67gmf                  1/1     Running     0          5m51s
kube-system   pod/helm-install-traefik-95djw                0/1     Completed   1          5m52s
kube-system   pod/helm-install-traefik-crd-pfdj8            0/1     Completed   0          5m52s
kube-system   pod/local-path-provisioner-86f46b7bf7-xsmpb   1/1     Running     0          5m51s
kube-system   pod/metrics-server-557ff575fb-x7bqh           1/1     Running     0          5m51s
kube-system   pod/svclb-traefik-9a2525f6-hg9nv              2/2     Running     0          5m36s
kube-system   pod/test-pod                                  1/1     Running     0          4m31s
kube-system   pod/traefik-5fb479b77-pdp75                   1/1     Running     0          5m37s

$ k delete -f pvc.yaml -f podpvc.yaml

persistentvolumeclaim "test-pvc" deleted
pod "test-pod" deleted

$ sudo ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent | grep -i denied

type=AVC msg=audit(1717717275.427:887): avc:  denied  { remove_name } for  pid=7378 comm="rm" name="pvc-956284c7-c03a-4955-a634-a7bb4b0960d3_kube-system_test-pvc" dev="xvda3" ino=26007582 scontext=system_u:system_r:container_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_lib_t:s0 tclass=dir permissive=1

@VestigeJ closed this as completed on Jun 6, 2024