feat: run rook-operator and toolbox with non-root user #8734

Closed
adabuleanu opened this issue Sep 16, 2021 · 11 comments · Fixed by #8744

@adabuleanu

Is this a bug report or feature request?

  • Feature Request

What should the feature do:
Run Rook with the Ceph storage provider as non-root, with a configurable UID/GID.

I want to know if this feature is already implemented for all Ceph components, and how (I did not find any docs on it).

From my findings:

--setuser user
will apply the appropriate user ownership to the file specified by the option ‘-o’.

--setgroup group
will apply the appropriate group ownership to the file specified by the option ‘-o’.

What is the use case behind this feature:
In an enterprise environment, running containers as root is a security concern.

Environment:

rook with ceph on top of k8s

@BlaineEXE
Member

All Ceph containers run as the ceph:ceph user:group. If you do a kubectl describe pod on any Ceph daemon pod, you will see the Ceph options used to select the user and group.
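For example (pod name and namespace are illustrative; they will differ per install):

$ kubectl -n kube-system describe pod rook-ceph-mon-a-68566dccf5-dffft | egrep 'setuser|setgroup'
      --setuser=ceph
      --setgroup=ceph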

From the ticket you linked, I can see that Seb "closed this in #2778", which is the PR that adds the feature.

I believe this addresses your issue and is a duplicate of #2778, so I am closing this with the 'duplicate' label.

@adabuleanu
Author

Indeed, if you describe specific pods, they run with user and group ceph:

$ kubectl get pod -n kube-system  rook-ceph-mgr-a-594dfd4d76-7bqrm  -o yaml
...
  - args:
    - --fsid=acec90b7-7fed-48a5-80a8-c74c77605db2
    - --keyring=/etc/ceph/keyring-store/keyring
    - --log-to-stderr=true
    - --err-to-stderr=true
    - --mon-cluster-log-to-stderr=true
    - '--log-stderr-prefix=debug '
    - --default-log-to-file=false
    - --default-mon-cluster-log-to-file=false
    - --mon-host=$(ROOK_CEPH_MON_HOST)
    - --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS)
    - --id=a
    - --setuser=ceph
    - --setgroup=ceph
    - --client-mount-uid=0
    - --client-mount-gid=0
    - --foreground
    command:
    - ceph-mgr

This is reflected at the process level as well:

ps aux | grep ceph-mgr
167      21984  2.9 54.7 48955040 36041776 ?   Ssl  Aug30 751:07 ceph-mgr --fsid=acec90b7-7fed-48a5-80a8-c74c77605db2 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.68:3300,v1:10.10.10.68:6789],[v2:10.10.10.6:3300,v1:10.10.10.6:6789],[v2:10.10.10.64:3300,v1:10.10.10.64:6789] --mon-initial-members=c,a,b --id=a --setuser=ceph --setgroup=ceph --client-mount-uid=0 --client-mount-gid=0 --foreground
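(UID 167 is the fixed UID reserved for the ceph user in the RHEL-based Ceph container images; it renders as a bare number here only because the host has no matching passwd entry. A quick check inside the container, pod name as above, should print something like:)

$ kubectl -n kube-system exec rook-ceph-mgr-a-594dfd4d76-7bqrm -- id ceph
uid=167(ceph) gid=167(ceph) groups=167(ceph)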

But most of the Rook components run as root:

$ ssh node-1 'ps aux | egrep "ceph|rook"'
root       13708  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-msgr]
root      242386  0.0  0.1 715388 23492 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.rbd.csi.ceph.com/csi.sock
root      242421  0.0  0.5 2586796 92400 ?       Ssl  Sep15   0:44 /usr/local/bin/cephcsi --nodeid=node-1 --endpoint=unix:///csi/csi.sock --v=0 --type=rbd --nodeserver=true --drivername=kube-system.rbd.csi.ceph.com --pidlimit=-1 --metricsport=9090 --metricspath=/metrics --enablegrpcmetrics=false
root      242490  0.0  0.3 1781220 58832 ?       Ssl  Sep15   0:16 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9080 --metricspath=/metrics --polltime=60s --timeout=3s
root      242596  0.0  0.1 715644 23296 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.cephfs.csi.ceph.com/csi.sock
root      242714  0.0  0.5 2218644 88940 ?       Ssl  Sep15   0:42 /usr/local/bin/cephcsi --nodeid=node-1 --type=cephfs --endpoint=unix:///csi/csi.sock --v=0 --nodeserver=true --drivername=kube-system.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=false
root      242859  0.0  0.3 1781536 61896 ?       Ssl  Sep15   0:15 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9081 --metricspath=/metrics --polltime=60s --timeout=3s
root      243099  0.0  0.3 1781472 63200 ?       Ssl  Sep15   0:16 /usr/local/bin/cephcsi --nodeid=node-1 --type=cephfs --endpoint=unix:///csi/csi-provisioner.sock --v=0 --controllerserver=true --drivername=kube-system.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=false
root      243169  0.0  0.3 1707484 57916 ?       Ssl  Sep15   0:13 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi-provisioner.sock --metricsport=9081 --metricspath=/metrics --polltime=60s --timeout=3s
167       244582  1.3  2.3 1547924 387256 ?      Ssl  Sep15  23:30 ceph-mgr --fsid=4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.7:3300,v1:10.10.10.7:6789],[v2:10.10.10.172:3300,v1:10.10.10.172:6789],[v2:10.10.10.112:3300,v1:10.10.10.112:6789] --mon-initial-members=a,b,c --id=a --setuser=ceph --setgroup=ceph --client-mount-uid=0 --client-mount-gid=0 --foreground
root      245882  0.0  0.0  44572 12040 ?        Ss   Sep15   0:00 /usr/libexec/platform-python -s /usr/bin/ceph-crash
167       245888  0.6  1.9 1206212 311968 ?      Ssl  Sep15  11:01 ceph-osd --foreground --id 1 --fsid 4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --setuser ceph --setgroup ceph --crush-location=root=default host=node-1 --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false
root      251090  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      251091  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
root      265047  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      265048  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
root      278830  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      278831  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]

$ ssh node-2 'ps aux | egrep "ceph|rook"'
root      256678  0.0  0.1 715388 20736 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.rbd.csi.ceph.com/csi.sock
root      256756  0.0  0.5 2521772 94224 ?       Ssl  Sep15   0:49 /usr/local/bin/cephcsi --nodeid=node-2 --endpoint=unix:///csi/csi.sock --v=0 --type=rbd --nodeserver=true --drivername=kube-system.rbd.csi.ceph.com --pidlimit=-1 --metricsport=9090 --metricspath=/metrics --enablegrpcmetrics=false
root      256865  0.0  0.3 1781220 59924 ?       Ssl  Sep15   0:16 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9080 --metricspath=/metrics --polltime=60s --timeout=3s
root      257070  0.0  0.1 715388 21668 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.cephfs.csi.ceph.com/csi.sock
root      257170  0.0  0.5 2291868 90908 ?       Ssl  Sep15   0:45 /usr/local/bin/cephcsi --nodeid=node-2 --type=cephfs --endpoint=unix:///csi/csi.sock --v=0 --nodeserver=true --drivername=kube-system.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=false
root      257262  0.0  0.3 1781728 60904 ?       Ssl  Sep15   0:13 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9081 --metricspath=/metrics --polltime=60s --timeout=3s
root      257355  0.0  0.5 2521260 92540 ?       Ssl  Sep15   0:33 /usr/local/bin/cephcsi --nodeid=node-2 --endpoint=unix:///csi/csi-provisioner.sock --v=0 --type=rbd --controllerserver=true --drivername=kube-system.rbd.csi.ceph.com --pidlimit=-1 --metricsport=9090 --metricspath=/metrics --enablegrpcmetrics=false
root      257400  0.0  0.4 1781476 65016 ?       Ssl  Sep15   0:15 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi-provisioner.sock --metricsport=9080 --metricspath=/metrics --polltime=60s --timeout=3s
167       260016  0.6  1.1 1250832 182184 ?      Ssl  Sep15  11:17 ceph-osd --foreground --id 2 --fsid 4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --setuser ceph --setgroup ceph --crush-location=root=default host=node-2 --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false
167       260804  0.1  0.2 431040 45056 ?        Ssl  Sep15   2:34 ceph-mds --fsid=4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.7:3300,v1:10.10.10.7:6789],[v2:10.10.10.172:3300,v1:10.10.10.172:6789],[v2:10.10.10.112:3300,v1:10.10.10.112:6789] --mon-initial-members=a,b,c --id=rook-ceph-ceph-fs-a --setuser=ceph --setgroup=ceph --foreground
root      260873  0.0  0.0  44572 11844 ?        Ss   Sep15   0:00 /usr/libexec/platform-python -s /usr/bin/ceph-crash
root      279760  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-msgr]
root      279766  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      279767  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
root      279951  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      279952  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
root      280085  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      280086  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
167       579226  0.7  1.6 694200 260316 ?       Ssl  13:10   1:16 ceph-mon --fsid=4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.7:3300,v1:10.10.10.7:6789],[v2:10.10.10.172:3300,v1:10.10.10.172:6789],[v2:10.10.10.112:3300,v1:10.10.10.112:6789] --mon-initial-members=a,b,c --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.10.10.172 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db

$ ssh node-3 'ps aux | egrep "ceph|rook"'
root      259655  0.0  0.1 715388 23516 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.rbd.csi.ceph.com/csi.sock
root      259690  0.0  0.5 2521772 92276 ?       Ssl  Sep15   0:46 /usr/local/bin/cephcsi --nodeid=node-3 --endpoint=unix:///csi/csi.sock --v=0 --type=rbd --nodeserver=true --drivername=kube-system.rbd.csi.ceph.com --pidlimit=-1 --metricsport=9090 --metricspath=/metrics --enablegrpcmetrics=false
root      259810  0.0  0.3 1781216 63236 ?       Ssl  Sep15   0:17 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9080 --metricspath=/metrics --polltime=60s --timeout=3s
root      260013  0.0  0.1 715644 21236 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.cephfs.csi.ceph.com/csi.sock
root      260089  0.0  0.3 1781732 62176 ?       Ssl  Sep15   0:27 /usr/local/bin/cephcsi --nodeid=node-3 --type=cephfs --endpoint=unix:///csi/csi.sock --v=0 --nodeserver=true --drivername=kube-system.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=false
root      260207  0.0  0.4 1781216 65696 ?       Ssl  Sep15   0:18 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9081 --metricspath=/metrics --polltime=60s --timeout=3s
root      260295  0.0  0.3 1707492 61284 ?       Ssl  Sep15   0:19 /usr/local/bin/cephcsi --nodeid=node-3 --endpoint=unix:///csi/csi-provisioner.sock --v=0 --type=rbd --controllerserver=true --drivername=kube-system.rbd.csi.ceph.com --pidlimit=-1 --metricsport=9090 --metricspath=/metrics --enablegrpcmetrics=false
root      260382  0.0  0.3 1781216 58816 ?       Ssl  Sep15   0:18 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi-provisioner.sock --metricsport=9080 --metricspath=/metrics --polltime=60s --timeout=3s
167       261644  0.9  5.7 1395504 938028 ?      Ssl  Sep15  16:48 ceph-mon --fsid=4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.7:3300,v1:10.10.10.7:6789],[v2:10.10.10.172:3300,v1:10.10.10.172:6789],[v2:10.10.10.112:3300,v1:10.10.10.112:6789] --mon-initial-members=a,b,c --id=c --setuser=ceph --setgroup=ceph --foreground --public-addr=10.10.10.112 --setuser-match-path=/var/lib/ceph/mon/ceph-c/store.db
167       263136  0.6  1.8 1128716 306292 ?      Ssl  Sep15  10:57 ceph-osd --foreground --id 3 --fsid 4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --setuser ceph --setgroup ceph --crush-location=root=default host=node-3 --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false
167       264036  0.2  0.2 446404 46036 ?        Ssl  Sep15   4:12 ceph-mds --fsid=4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.7:3300,v1:10.10.10.7:6789],[v2:10.10.10.172:3300,v1:10.10.10.172:6789],[v2:10.10.10.112:3300,v1:10.10.10.112:6789] --mon-initial-members=a,b,c --id=rook-ceph-ceph-fs-b --setuser=ceph --setgroup=ceph --foreground
root      264145  0.0  0.0  44572 11928 ?        Ss   Sep15   0:00 /usr/libexec/platform-python -s /usr/bin/ceph-crash
root      267048  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-msgr]
root      283733  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      283734  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
root      455455  0.0  0.0      0     0 ?        I    15:48   0:00 [kworker/1:0-ceph-msgr]
root      463114  0.0  0.0      0     0 ?        I    15:51   0:00 [kworker/7:3-ceph-msgr]
root      827037  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      827038  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]

$ ssh node-4 'ps aux | egrep "ceph|rook"'
root      244919  0.0  0.0   4348   704 ?        Ss   Sep15   0:02 /tini -- /usr/local/bin/rook ceph operator
root      244930  0.2  0.5 761272 83740 ?        Sl   Sep15   4:25 /usr/local/bin/rook ceph operator
root      245776  0.0  0.1 715388 22996 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.rbd.csi.ceph.com/csi.sock
root      245817  0.0  0.5 2522740 93232 ?       Ssl  Sep15   0:49 /usr/local/bin/cephcsi --nodeid=node-4 --endpoint=unix:///csi/csi.sock --v=0 --type=rbd --nodeserver=true --drivername=kube-system.rbd.csi.ceph.com --pidlimit=-1 --metricsport=9090 --metricspath=/metrics --enablegrpcmetrics=false
root      245858  0.0  0.3 1780960 61156 ?       Ssl  Sep15   0:15 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9080 --metricspath=/metrics --polltime=60s --timeout=3s
root      245986  0.0  0.1 714236 21496 ?        Ssl  Sep15   0:02 /csi-node-driver-registrar --v=0 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/kube-system.cephfs.csi.ceph.com/csi.sock
root      246025  0.0  0.4 1781728 67904 ?       Ssl  Sep15   0:30 /usr/local/bin/cephcsi --nodeid=node-4 --type=cephfs --endpoint=unix:///csi/csi.sock --v=0 --nodeserver=true --drivername=kube-system.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=false
root      246176  0.0  0.3 1781472 64540 ?       Ssl  Sep15   0:18 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi.sock --metricsport=9081 --metricspath=/metrics --polltime=60s --timeout=3s
root      246441  0.0  0.5 2292120 89564 ?       Ssl  Sep15   0:35 /usr/local/bin/cephcsi --nodeid=node-4 --type=cephfs --endpoint=unix:///csi/csi-provisioner.sock --v=0 --controllerserver=true --drivername=kube-system.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=false
root      246485  0.0  0.3 1781472 61008 ?       Ssl  Sep15   0:17 /usr/local/bin/cephcsi --type=liveness --endpoint=unix:///csi/csi-provisioner.sock --metricsport=9081 --metricspath=/metrics --polltime=60s --timeout=3s
167       247843  1.4  5.8 1398728 941608 ?      Ssl  Sep15  24:47 ceph-mon --fsid=4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.10.10.7:3300,v1:10.10.10.7:6789] --mon-initial-members=a --id=a --setuser=ceph --setgroup=ceph --foreground --public-addr=10.10.10.7 --setuser-match-path=/var/lib/ceph/mon/ceph-a/store.db
root      249887  0.0  0.0  44572 11960 ?        Ss   Sep15   0:00 /usr/libexec/platform-python -s /usr/bin/ceph-crash
167       250526  0.6  1.4 1142368 227972 ?      Ssl  Sep15  10:28 ceph-osd --foreground --id 0 --fsid 4b8c33bf-eaa1-45fa-a350-c40b17c04e2a --setuser ceph --setgroup ceph --crush-location=root=default host=node-4 --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug  --default-log-to-file=false --default-mon-cluster-log-to-file=false
root      272684  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-msgr]
root      272690  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      272691  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]
root      272711  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-watch-noti]
root      272712  0.0  0.0      0     0 ?        I<   Sep15   0:00 [ceph-completion]

These are the Rook/Ceph pods:

$ kubectl get pods -A
NAMESPACE                        NAME                                                              READY   STATUS      RESTARTS   AGE
kube-system                      csi-cephfsplugin-25gzc                                            3/3     Running     0          28h
kube-system                      csi-cephfsplugin-clsgl                                            3/3     Running     0          28h
kube-system                      csi-cephfsplugin-fhs4t                                            3/3     Running     0          28h
kube-system                      csi-cephfsplugin-jx79w                                            3/3     Running     0          28h
kube-system                      csi-cephfsplugin-provisioner-79bf84647f-4p5wp                     6/6     Running     24         28h
kube-system                      csi-cephfsplugin-provisioner-79bf84647f-qvtzv                     6/6     Running     21         28h
kube-system                      csi-rbdplugin-fmfxx                                               3/3     Running     0          28h
kube-system                      csi-rbdplugin-gfkp9                                               3/3     Running     0          28h
kube-system                      csi-rbdplugin-lvntm                                               3/3     Running     0          28h
kube-system                      csi-rbdplugin-mkftp                                               3/3     Running     0          28h
kube-system                      csi-rbdplugin-provisioner-86b8997d98-7zxvh                        6/6     Running     18         28h
kube-system                      csi-rbdplugin-provisioner-86b8997d98-hstz2                        6/6     Running     25         28h
kube-system                      rook-ceph-ceph-tools-0                                            1/1     Running     0          28h
kube-system                      rook-ceph-crashcollector-node-1-7d97bcc8b9-n46dt                  1/1     Running     0          28h
kube-system                      rook-ceph-crashcollector-node-2-9d8bd9694-l6bxq                   1/1     Running     0          28h
kube-system                      rook-ceph-crashcollector-node-3-7db56fbc49-twlg9                  1/1     Running     0          28h
kube-system                      rook-ceph-crashcollector-node-4-f4859486b-grht4                   1/1     Running     0          28h
kube-system                      rook-ceph-mds-rook-ceph-ceph-fs-a-559cb54765-pnthk                1/1     Running     0          28h
kube-system                      rook-ceph-mds-rook-ceph-ceph-fs-b-68f775dddf-nmbdr                1/1     Running     0          28h
kube-system                      rook-ceph-mgr-a-5f877dcdc7-zkrd7                                  1/1     Running     0          28h
kube-system                      rook-ceph-mon-a-68566dccf5-dffft                                  1/1     Running     0          28h
kube-system                      rook-ceph-mon-b-6b47947489-k27cx                                  1/1     Running     1          28h
kube-system                      rook-ceph-mon-c-76c96fc6d-jcpgp                                   1/1     Running     0          28h
kube-system                      rook-ceph-operator-5c6476c56b-98l5r                               1/1     Running     0          28h
kube-system                      rook-ceph-osd-0-7cbf49c85-9d59j                                   1/1     Running     0          28h
kube-system                      rook-ceph-osd-1-76c779b965-cr5js                                  1/1     Running     0          28h
kube-system                      rook-ceph-osd-2-6bfbf8f9df-6mrkl                                  1/1     Running     0          28h
kube-system                      rook-ceph-osd-3-55c7f6464-8pzzb                                   1/1     Running     0          28h
kube-system                      rook-ceph-osd-prepare-node-1-dvr7s                                0/1     Completed   0          28h
kube-system                      rook-ceph-osd-prepare-node-2-crh9h                                0/1     Completed   0          28h
kube-system                      rook-ceph-osd-prepare-node-3-s65gg                                0/1     Completed   0          28h
kube-system                      rook-ceph-osd-prepare-node-4-mmdvb                                0/1     Completed   0          28h

In the feature request I also asked about a configurable UID/GID.
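In Kubernetes terms, a configurable UID/GID would typically surface as a pod-level securityContext; a generic sketch with example values, not something Rook exposes today:

securityContext:
  runAsNonRoot: true
  runAsUser: 2000    # example UID
  runAsGroup: 2000   # example GID
  fsGroup: 2000      # group ownership applied to mounted volumes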

@adabuleanu
Author

adabuleanu commented Sep 16, 2021

Can you please reopen this ticket? #2778 covers only specific Ceph components (mon, mgr, mds, osd); all the other components run as root.

@travisn
Member

travisn commented Sep 16, 2021

@adabuleanu All the Ceph daemons are running as ceph. It's the CSI driver processes that are running as root; they don't support running as a different user.

@adabuleanu
Author

@travisn do you plan to support such a feature in the future?

@leseb
Member

leseb commented Sep 17, 2021

@travisn do you plan to support such a feature in the future?

Please open an issue at https://github.com/ceph/ceph-csi/issues so the ceph-csi maintainers can help.

@adabuleanu
Author

Will do. Thx. Also, rook-ceph-tools and rook-operator run as root. Is this something you can address on your side?

@adabuleanu
Author

Opened ceph/ceph-csi#2519 on ceph-csi. Also, I want to know if there is any intention to make the ceph daemons run with configurable UID/GID, since ceph supports this. Right now, the ceph user is hardcoded. Thx.

@leseb
Member

leseb commented Sep 17, 2021

Opened ceph/ceph-csi#2519 on ceph-csi. Also, I want to know if there is any intention to make the ceph daemons run with configurable UID/GID, since ceph supports this. Right now, the ceph user is hardcoded. Thx.

No plan to use a different user than ceph at the moment. Is this a concern? If so, please elaborate. Thanks.

@adabuleanu
Author

No plan to use a different user than ceph at the moment. Is this a concern? If so, please elaborate. Thanks.

Here are a couple of arguments for a configurable UID/GID:

  • consistency: the entire solution (not just Rook with Ceph) can run under a single user
  • security-wise, you have the ability to target security policies at a specific user
  • as a general approach, most upstream Helm charts offer the option to configure a securityContext (a typical pattern is sketched below)
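The usual upstream-chart pattern looks roughly like this (a generic sketch, not Rook's actual chart):

# values.yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 2000

# deployment template (container spec)
      securityContext:
        {{- toYaml .Values.securityContext | nindent 8 }}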

Also, are there any plans to run rook-ceph-tools and rook-operator as non-root?

@leseb
Member

leseb commented Sep 17, 2021

No plan to use a different user than ceph at the moment. Is this a concern? If so, please elaborate. Thanks.

Here are a couple of arguments for a configurable UID/GID:

  • consistency: the entire solution (not just Rook with Ceph) can run under a single user
  • security-wise, you have the ability to target security policies at a specific user
  • as a general approach, most upstream Helm charts offer the option to configure a securityContext

Also, are there any plans to run rook-ceph-tools and rook-operator as non-root?

I think running the rook operator and the toolbox as non-root is a reasonable target. I don't remember any reason to require root. I've rephrased the title of the issue and we will try to work on this.

@leseb leseb reopened this Sep 17, 2021
@leseb leseb changed the title run rook with ceph storage provider as non-root feat: run rook-operator and toolbox with non-root user Sep 17, 2021
@leseb leseb removed the duplicate label Sep 17, 2021
@leseb leseb added this to To do in v1.8 via automation Sep 17, 2021
@leseb leseb self-assigned this Sep 17, 2021
leseb added a commit to leseb/rook that referenced this issue Sep 17, 2021
The rook operator as well as the toolbox pod run with the "rook" user
with UID 2016. The UID was chosen based on the year of the initial
commit in the rook/rook repository.
No more root user running.

Closes: rook#8734
Signed-off-by: Sébastien Han <seb@redhat.com>
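(In manifest terms, the change amounts to roughly the following on the operator and toolbox pod specs; a sketch inferred from the commit message above, the authoritative change is in #8744:)

spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 2016    # the "rook" user described in the commit message
    runAsGroup: 2016   # group assumed to match the UID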
@leseb leseb moved this from To do to Review in progress in v1.8 Sep 17, 2021
v1.8 automation moved this from Review in progress to Done Nov 5, 2021
parth-gr pushed a commit to parth-gr/rook that referenced this issue Feb 22, 2022