This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

[nfs-provisioner] Quota not working #855

Closed
moonek opened this issue Jul 12, 2018 · 11 comments
Labels: area/nfs, lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)


moonek (Contributor) commented Jul 12, 2018

Hi,

I'm testing the nfs-provisioner quota feature, following this part of the documentation:
  • enable-xfs-quota - If the provisioner will set xfs quotas for each volume it provisions. Requires that the directory it creates volumes in ('/export') is xfs mounted with option prjquota/pquota, and that it has the privilege to run xfs_quota. Default false.

I tested whether the quota is enforced per PV, but each PV appears to share the entire NFS volume's capacity instead.

Here is my procedure.

  1. Set up the NFS server (10GB LV).
root@nfs-server:/mnt$ lvs
  LV                                     VG                                  Attr       LSize    Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LV_crash                               VGROOT                              -wi-ao----   12.00g
  LV_root                                VGROOT                              -wi-ao----   50.00g
  LV_swap                                VGROOT                              -wi-ao----    8.00g
  docker-pool                            VGROOT                              twi-aot--- <124.61g                                            73.95  36.50
  moon-test                              vg_c4784eed9c59c5bf1549bc5fa090c114 -wi-a-----   10.00g
root@nfs-server:/mnt$ mkdir /mnt/moon-test
root@nfs-server:/mnt$ mount -o prjquota /dev/vg_c4784eed9c59c5bf1549bc5fa090c114/moon-test /mnt/moon-test
root@nfs-server:/mnt$ mount | grep moon-test
/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test on /mnt/moon-test type xfs (rw,relatime,attr2,inode64,prjquota)
root@nfs-server:/mnt$
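(For reference, an equivalent persistent mount would be an /etc/fstab entry like the following; illustrative only, not part of my setup:)

/dev/vg_c4784eed9c59c5bf1549bc5fa090c114/moon-test  /mnt/moon-test  xfs  defaults,prjquota  0 0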
  2. Deploy nfs-provisioner.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      nodeSelector:
        kubernetes.io/hostname: nfs-server
      containers:
        - name: nfs-provisioner
          image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
          securityContext:
            privileged: true
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=example.com/nfs"
            - "-enable-xfs-quota=true"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /mnt/moon-test
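Not shown above: the deployment relies on a Service named nfs-provisioner (referenced via SERVICE_NAME, and visible in the provisioner log below as cluster IP 10.107.64.244). I have omitted my exact manifest; a minimal sketch would look like this:

kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  namespace: kube-system
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
  selector:
    app: nfs-provisioner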
  3. Create the StorageClass, a 1Gi PVC, and a sample app.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: moon
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: example-nfs
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: moon
  (...skip...)
        volumeMounts:
        - mountPath: /mnt
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-pvc
  4. The PV was successfully created and mounted into the app.
root@jumphost:/root$ kubectl get pv | grep example-nfs
pvc-e8a8e853-85be-11e8-bb88-00505623ebd7           1Gi        RWX            Delete           Bound         moon/nfs-pvc                                                     example-nfs              6m
root@jumphost:/root$ kubectl get pvc,po -n moon
NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-pvc   Bound     pvc-e8a8e853-85be-11e8-bb88-00505623ebd7   1Gi        RWX            example-nfs    6m

NAME                        READY     STATUS    RESTARTS   AGE
pod/nginx-5894d7489-57kdw   1/1       Running   0          6m
  5. When I attach to the container and run df -h, I see the total capacity of the NFS LV, and I have confirmed that I can write more than 1Gi.
root@jumphost:/root$ kubectl exec -it nginx-5894d7489-57kdw bash -n moon
root@nginx-5894d7489-57kdw:/# df -h
Filesystem                                                                                           Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:0-201364590-ae8c0bc6569e65ad5fea3713a973ea52c78b3400715ff413b37ac14ab2ebdad8   10G  162M  9.9G   2% /
tmpfs                                                                                                 32G     0   32G   0% /dev
tmpfs                                                                                                 32G     0   32G   0% /sys/fs/cgroup
10.107.64.244:/export/pvc-e8a8e853-85be-11e8-bb88-00505623ebd7                                        10G   32M   10G   1% /mnt
/dev/mapper/VGROOT-LV_root                                                                            50G   36G   15G  71% /etc/hosts
shm                                                                                                   64M     0   64M   0% /dev/shm
tmpfs                                                                                                 32G   12K   32G   1% /run/secrets/kubernetes.io/serviceaccount
root@nginx-5894d7489-57kdw:/# cd /mnt
root@nginx-5894d7489-57kdw:/mnt# dd if=/dev/zero of=f1 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 34.0608 s, 61.6 MB/s
root@nginx-5894d7489-57kdw:/mnt# dd if=/dev/zero of=f2 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 14.9372 s, 70.2 MB/s
root@nginx-5894d7489-57kdw:/mnt# ls
f1  f2
root@nginx-5894d7489-57kdw:/mnt# du -sh
3.0G    .
root@nginx-5894d7489-57kdw:/mnt#
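(Not part of the steps above: a standard xfs_quota project report from the server is a quick way to confirm whether any limit is being enforced at all:)

xfs_quota -x -c 'report -p' /mnt/moon-test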

Additional information from the nfs-server:

root@nfs-server:/mnt/moon-test$ ll
total 20
-rw-r--r-- 1 root root 5152 Jul 12 19:32 ganesha.log
-rw------- 1 root root   36 Jul 12 19:14 nfs-provisioner.identity
-rw-r--r-- 1 root root   63 Jul 12 19:32 projects
drwxrwsrwx 2 root root   26 Jul 12 19:55 pvc-e8a8e853-85be-11e8-bb88-00505623ebd7
-rw------- 1 root root  902 Jul 12 19:32 vfs.conf
root@nfs-server:/mnt/moon-test$ cat ganesha.log
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] main :MAIN :EVENT :nfs-ganesha Starting: Ganesha Version /nfs-ganesha-2.4.0.3/src, built at Dec 20 2017 23:00:29 on
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] claim_posix_filesystems :FSAL :CRIT :Could not stat directory for path /nonexistent
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] vfs_create_export :FSAL :CRIT :resolve_posix_filesystem(/nonexistent) returned No such file or directory (2)
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] fsal_cfg_commit :CONFIG :CRIT :Could not create export for (/nonexistent) to (/nonexistent)
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] export_commit_common :CONFIG :CRIT :Export id 0 can only export "/" not (/nonexistent)
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:28): 1 validation errors in block FSAL
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:28): Errors processing block (FSAL)
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:12): 1 validation errors in block EXPORT
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] config_errs_to_log :CONFIG :CRIT :Config File (/export/vfs.conf:12): Errors processing block (EXPORT)
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs4_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
12/07/2018 10:14:08 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
12/07/2018 10:15:38 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE
12/07/2018 10:32:42 : epoch 5b4729f0 : nfs-provisioner-5b465c656-77mhm : nfs-ganesha-33[dbus_heartbeat] export_commit_common :EXPORT :CRIT :Clients = (0x7f75c444d0e8,0x7f75c444d0e8) next = (0x7f75c444d0e8, 0x7f75c444d0e8)
root@nfs-server:/mnt/moon-test$ cat projects

1:/export/pvc-e8a8e853-85be-11e8-bb88-00505623ebd7:1073741824
root@nfs-server:/mnt/moon-test$ xfs_quota -x -c state
User quota state on /mnt/moon-test (/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test)
  Accounting: OFF
  Enforcement: OFF
  Inode: #0 (0 blocks, 0 extents)
Group quota state on /mnt/moon-test (/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test)
  Accounting: OFF
  Enforcement: OFF
  Inode: #67 (1 blocks, 1 extents)
Project quota state on /mnt/moon-test (/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test)
  Accounting: ON
  Enforcement: ON
  Inode: #67 (1 blocks, 1 extents)
Blocks grace time: [7 days]
Inodes grace time: [7 days]
Realtime Blocks grace time: [7 days]
root@nfs-server:/mnt/moon-test$ xfs_quota -V
xfs_quota version 4.5.0
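For reference, the projects file above appears to use the format id:path:limit-in-bytes (1073741824 bytes = 1Gi). If the feature works as I understand it, applying that entry should amount to something like these two commands (my guess at the intent, not confirmed; paths as seen from wherever xfs_quota runs):

xfs_quota -x -c 'project -s -p /export/pvc-e8a8e853-85be-11e8-bb88-00505623ebd7 1' /export
xfs_quota -x -c 'limit -p bhard=1073741824 1' /export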

Here is the provisioner log:

root@jumphost:/root$ kubectl logs -f nfs-provisioner-5b465c656-77mhm -n kube-system
I0712 10:14:08.228539       1 main.go:62] Provisioner example.com/nfs specified
I0712 10:14:08.244098       1 main.go:82] Setting up NFS server!
I0712 10:14:08.558216       1 server.go:144] starting RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0712 10:14:08.558253       1 server.go:155] ending RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0712 10:14:08.566688       1 server.go:129] Running NFS server!
I0712 10:14:13.604579       1 controller.go:407] Starting provisioner controller 537f9a64-85bc-11e8-9930-023de89e57d6!
I0712 10:32:42.861026       1 controller.go:1084] scheduleOperation[lock-provision-moon/nfs-pvc[e8a8e853-85be-11e8-bb88-00505623ebd7]]
I0712 10:32:42.868686       1 controller.go:1084] scheduleOperation[lock-provision-moon/nfs-pvc[e8a8e853-85be-11e8-bb88-00505623ebd7]]
I0712 10:32:42.872433       1 leaderelection.go:156] attempting to acquire leader lease...
I0712 10:32:42.880659       1 leaderelection.go:178] successfully acquired lease to provision for pvc moon/nfs-pvc
I0712 10:32:42.880708       1 controller.go:1084] scheduleOperation[provision-moon/nfs-pvc[e8a8e853-85be-11e8-bb88-00505623ebd7]]
I0712 10:32:42.889349       1 provision.go:421] using service SERVICE_NAME=nfs-provisioner cluster IP 10.107.64.244 as NFS server IP
I0712 10:32:42.923837       1 controller.go:817] volume "pvc-e8a8e853-85be-11e8-bb88-00505623ebd7" for claim "moon/nfs-pvc" created
I0712 10:32:42.927335       1 controller.go:834] volume "pvc-e8a8e853-85be-11e8-bb88-00505623ebd7" for claim "moon/nfs-pvc" saved
I0712 10:32:42.927354       1 controller.go:870] volume "pvc-e8a8e853-85be-11e8-bb88-00505623ebd7" provisioned for claim "moon/nfs-pvc"
I0712 10:32:44.886815       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc moon/nfs-pvc, task succeeded
wongma7 (Contributor) commented Jul 12, 2018

Thanks for your detailed logs.
What's the output of xfs_quota -x -c 'report /mnt/moon-test' from the nfs-server?
What if you manually manipulate the quota from the nfs-server? E.g. run commands like the following (I'm sorry, I'm not sure these exact commands are correct) to enable the project quota:
xfs_quota -x -c 'project -s -p /export/pvc-e8a8e853-85be-11e8-bb88-00505623ebd7 1' /mnt/moon-test or, to change the limit, xfs_quota -x -c 'limit -p bhard=1234 1' /mnt/moon-test

Truth is, this feature is more of a proof of concept (all of nfs-provisioner is, to some extent, but this feature more than anything else), and I have not tested it since its implementation. Of course, it has not changed since its implementation, and neither has xfs, so I'm at a loss as to why it doesn't work; if any xfs_quota commands failed, I would expect to see them in the nfs-provisioner log.

moonek (Contributor, Author) commented Jul 13, 2018

Thank you for the quick response.
I'm also not familiar with xfs_quota, but it seems to work when done manually.
I made a new volume (pvc-d22fc16e-8639-11e8-9a0e-005056151053).

root@nfs-server:/mnt/moon-test$ ll
total 24
-rw-r--r-- 1 root root 10304 Jul 13 10:13 ganesha.log
-rw------- 1 root root    36 Jul 12 19:14 nfs-provisioner.identity
-rw-r--r-- 1 root root    63 Jul 13 10:12 projects
drwxrwsrwx 2 root root     6 Jul 13 10:12 pvc-d22fc16e-8639-11e8-9a0e-005056151053
-rw------- 1 root root   902 Jul 13 10:12 vfs.conf
root@nfs-server:/mnt/moon-test$ cat projects

1:/export/pvc-d22fc16e-8639-11e8-9a0e-005056151053:1073741824
root@nfs-server:/mnt/moon-test$ xfs_quota -x -c 'report /mnt/moon-test'
Project quota on /mnt/moon-test (/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                 24          0          0     00 [--------]

I ran the command you gave me, but /export is a path inside the container, so the host did not recognize it.

root@nfs-server:/mnt/moon-test$ xfs_quota -x -c 'project -s -p /export/pvc-d22fc16e-8639-11e8-9a0e-005056151053 1' /mnt/moon-test
xfs_quota: cannot setup path for project dir /export/pvc-d22fc16e-8639-11e8-9a0e-005056151053: No such file or directory
root@nfs-server:/mnt/moon-test$

So when I set up the project using the host path, I saw that the project was added.

root@nfs-server:/mnt/moon-test$ xfs_quota -x -c 'project -s -p /mnt/moon-test/pvc-d22fc16e-8639-11e8-9a0e-005056151053 1' /mnt/moon-test
Setting up project 1 (path /mnt/moon-test/pvc-d22fc16e-8639-11e8-9a0e-005056151053)...
Processed 1 (/etc/projects and cmdline) paths for project 1 with recursion depth infinite (-1).
root@nfs-server:/mnt/moon-test$
root@nfs-server:/mnt/moon-test$ xfs_quota -x -c 'report /mnt/moon-test'
Project quota on /mnt/moon-test (/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                 24          0          0     00 [--------]
#1                  0          0          0     00 [--------]

After manually setting a quota on the newly added project, I confirmed that the quota is enforced in that directory.

root@nfs-server:/mnt/moon-test$ xfs_quota -x -c 'limit -p bhard=1024m 1' /mnt/moon-test
root@nfs-server:/mnt/moon-test$ xfs_quota -x -c 'report /mnt/moon-test'
Project quota on /mnt/moon-test (/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                 24          0          0     00 [--------]
#1                  0          0    1048576     00 [--------]

root@nfs-server:/mnt/moon-test$ cd pvc-d22fc16e-8639-11e8-9a0e-005056151053/
root@nfs-server:/mnt/moon-test/pvc-d22fc16e-8639-11e8-9a0e-005056151053$
root@nfs-server:/mnt/moon-test/pvc-d22fc16e-8639-11e8-9a0e-005056151053$ dd if=/dev/zero of=f1 bs=1M count=2000
dd: error writing ‘f1’: No space left on device
1025+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.5307 s, 164 MB/s
root@nfs-server:/mnt/moon-test/pvc-d22fc16e-8639-11e8-9a0e-005056151053$ du -sh
1.0G    .
root@nfs-server:/mnt/moon-test/pvc-d22fc16e-8639-11e8-9a0e-005056151053$

It appears that the project information generated inside the nfs-provisioner container is not propagated to the host.
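To summarize the manual workaround that worked for me (host paths throughout; project ID 1 and the 1024m limit are just the values from this test, and <pvc-dir> stands for the provisioned volume directory):

# On the NFS server, using host paths rather than the container's /export:
xfs_quota -x -c 'project -s -p /mnt/moon-test/<pvc-dir> 1' /mnt/moon-test
xfs_quota -x -c 'limit -p bhard=1024m 1' /mnt/moon-test
xfs_quota -x -c 'report /mnt/moon-test'   # verify the hard limit appears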

wongma7 (Contributor) commented Jul 13, 2018

Hm, yeah, I mixed up my commands: half of the paths are container paths and half are host paths. What if we try running the commands from inside the container?
xfs_quota -x -c 'project -s -p /export/pvc-d22fc16e-8639-11e8-9a0e-005056151053 1' /export
Is it mounted with the prjquota option inside the container? mount | grep export

A privileged container should have no problems manipulating the quota. None of the commands errored, so something else must be wrong. I initially thought that maybe /mnt/moon-test had to be changed to /export, so that the paths on the host and in the container match (otherwise the host can't resolve the path), but that was not a limitation when I tested this feature.
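If that still fails, a few more things we could check from inside the container (suggestions only; I'm not sure which of these, if any, is the culprit):

# Does the container's own mount table show /export as xfs with prjquota?
grep export /proc/self/mounts
# Does xfs_quota think quota is enabled on /export at all?
xfs_quota -x -c 'state' /export
# Trace the failing command to see which syscall returns ENXIO ("No such device or address"):
strace -f -e trace=quotactl xfs_quota -x -c 'project -s -p /export/pvc-d22fc16e-8639-11e8-9a0e-005056151053 1' /export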

moonek (Contributor, Author) commented Jul 16, 2018

If I run it inside a container, it looks like this:

[root@nfs-provisioner-5b465c656-22c5c ~]# xfs_quota -x -c 'project -s -p /export/pvc-60e989eb-8897-11e8-9a0e-005056151053 1' /export
xfs_quota: cannot setup path for mount /export: No such device or address

mount shows the prjquota option.
However, the xfs_quota commands print nothing.

[root@nfs-provisioner-5b465c656-22c5c ~]# mount | grep /export
/dev/mapper/vg_c4784eed9c59c5bf1549bc5fa090c114-moon--test on /export type xfs (rw,relatime,attr2,inode64,prjquota)
[root@nfs-provisioner-5b465c656-22c5c ~]# xfs_quota -x -c 'report /export'
[root@nfs-provisioner-5b465c656-22c5c ~]# xfs_quota -x -c state
[root@nfs-provisioner-5b465c656-22c5c ~]# xfs_quota -x -c print
[root@nfs-provisioner-5b465c656-22c5c ~]# cd /export
[root@nfs-provisioner-5b465c656-22c5c export]# ls
ganesha.log  nfs-provisioner.identity  projects  pvc-60e989eb-8897-11e8-9a0e-005056151053  vfs.conf
[root@nfs-provisioner-5b465c656-22c5c export]#

The deployment YAML is in the issue body, and the container is indeed running privileged.

root@jumphost:/root$ kubectl get po nfs-provisioner-5b465c656-22c5c -n kube-system -oyaml
    (...skip...)
    securityContext:
      capabilities:
        add:
        - DAC_READ_SEARCH
        - SYS_RESOURCE
      privileged: true

bitnik commented Dec 10, 2018

Hi, I tried this feature out and had the same problem. Today I was reading the documentation again, and I saw this in https://github.com/kubernetes-incubator/external-storage/blob/nfs-provisioner-v2.2.0-k8s1.12/nfs/docs/deployment.md#deploying-the-provisioner:

Running outside of Kubernetes as a standalone container or binary is for when you want greater control over the app's lifecycle and/or the ability to set per-PV quotas.

The example above is deployed in Kubernetes, and I did the same.

Does this mean that right now the only way to have quotas working is to deploy nfs-provisioner outside of Kubernetes?
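For reference, the standalone-container deployment from that doc looks roughly like this (a reconstructed sketch; the exact flags, image tag, and mounts are assumptions on my part):

docker run -d --privileged \
  --cap-add DAC_READ_SEARCH --cap-add SYS_RESOURCE \
  -v $HOME/.kube:/.kube \
  -v /mnt/moon-test:/export \
  quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9 \
  -provisioner=example.com/nfs \
  -kubeconfig=/.kube/config \
  -enable-xfs-quota=true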

akram commented Dec 11, 2018

Hi all,

@bitnik that would make sense to me too, since setting quotas requires root privileges.
@wongma7 if I understand correctly, the Kubernetes Deployment object should also specify securityContext: { privileged: true }.

That is apparently not the case here. IMHO, xfs_quota is probably failing silently.

wongma7 (Contributor) commented Dec 17, 2018

@bitnik @akram @moonek it should work with privileged: true; I just tested it locally on Fedora (hack/local-up-cluster.sh). Do the nfs-provisioner logs show anything? As an example, here is what I see after I create a PVC that requests 1MiB from the provisioner, where the XFS file system at /tmp/xfs-mtpt is mounted to /export:

$ mount
...
/tmp/file.fs on /tmp/xfs-mtpt type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
...

$ cat /tmp/xfs-mtpt/projects 

1:/export/pvc-483444e2-0245-11e9-b5f3-408d5ce681c7:1048576

~ $ dd if=/dev/zero of=/tmp/xfs-mtpt/pvc-483444e2-0245-11e9-b5f3-408d5ce681c7/abcd bs=1024 count=102400
dd: error writing '/tmp/xfs-mtpt/pvc-483444e2-0245-11e9-b5f3-408d5ce681c7/abcd': No space left on device
1025+0 records in
1024+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650466 s, 161 MB/s

$ sudo xfs_quota -x -c 'report /tmp/xfs-mtpt/pvc-483444e2-0245-11e9-b5f3-408d5ce681c7/'
Project quota on /tmp/xfs-mtpt (/dev/loop6)
                               Blocks                     
Project ID       Used       Soft       Hard    Warn/Grace     
---------- -------------------------------------------------- 
#0               1244          0          0     00 [--------]
#1               1024          0       1024     00 [--------]
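(The report is in 1KiB blocks, so the hard limit of 1024 blocks corresponds to the 1MiB the PVC requested, and the dd above was cut off at exactly that point.)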

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Apr 27, 2019
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 27, 2019
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
