Ineffective configuration option in StorageClass "mountOptions:[noatime]" when using cephfs as backend storageclass #1506

Closed
PrometheusYu opened this issue Sep 23, 2020 · 7 comments
Labels: component/cephfs (Issues related to CephFS), wontfix (This will not be worked on)

Comments

@PrometheusYu

Environment:
Ceph cluster: 14.2.7 nautilus (stable)
Kubernetes: v1.16.3
Ceph CSI version: v2.1.1

We use the Helm chart to deploy the ceph-csi-cephfs plugin in the kube-system namespace:
$ helm list -n kube-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ceph-csi-cephfs kube-system 4 2020-07-14 10:20:07.048355676 +0800 CST deployed ceph-csi-cephfs-2.1.1-canary v2.1.1

This includes one DaemonSet named 'ceph-csi-cephfs-nodeplugin' and one Deployment named 'ceph-csi-cephfs-provisioner'.

Then I create a StorageClass with the following YAML:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc-noatime
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: 29e5e193-4fd2-4f98-b08a-8afee4a1a7bd
  fsName: cephfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
  - noatime
  - _netdev
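
(For reference, a quick generic way to confirm the options were recorded on the StorageClass object, assuming the manifest above was applied as-is:)

$ kubectl get storageclass csi-cephfs-sc-noatime -o yaml | grep -A3 mountOptions
# should list debug, noatime and _netdev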

After that, I create a deployment.yaml (using the nginx image, with a PersistentVolumeClaim mounted at '/usr/share/nginx/html'). The PersistentVolumeClaim points to the StorageClass 'csi-cephfs-sc-noatime' and looks like this (a sketch of the deployment itself follows the PVC manifest below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
  namespace: kube-monitor
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10G
  storageClassName: csi-cephfs-sc-noatime
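
The deployment.yaml itself is not shown here; a minimal sketch of such a deployment follows. The Deployment name and labels are hypothetical, and only the claim name and mount path are taken from the report:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo              # hypothetical name, not from the report
  namespace: kube-monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: pvc-nginx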

Once the deployment is running, I log into the container and run 'mount | grep fuse':

ceph-fuse on /usr/share/nginx/html type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

It seems that the 'noatime' mount option does not take effect.

In the log output of the 'csi-cephfsplugin' container, we can find:

 csi-cephfsplugin I0923 05:59:37.156925       1 utils.go:159] ID: 112512 GRPC call: /csi.v1.Node/NodeGetCapabilities
 csi-cephfsplugin I0923 05:59:37.156951       1 utils.go:160] ID: 112512 GRPC request: {}
 csi-cephfsplugin I0923 05:59:37.158292       1 utils.go:165] ID: 112512 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
 csi-cephfsplugin I0923 05:59:37.172295       1 utils.go:159] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11e
 a-96c2-82dd802f6fa7 GRPC call: /csi.v1.Node/NodeStageVolume
 csi-cephfsplugin I0923 05:59:37.172313       1 utils.go:160] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11e
 a-96c2-82dd802f6fa7 GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fc72a88-b573-4f75-b302-77
 b3a5069163/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["debug","noatime","_netdev"]}},"access_mode":{"mode":5}},"vol
 ume_context":{"clusterID":"29e5e193-4fd2-4f98-b08a-8afee4a1a7bd","fsName":"cephfs","storage.kubernetes.io/csiProvisionerIdentity":"1600132272856-8081-cephfs.csi.
 ceph.com"},"volume_id":"0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-96c2-82dd802f6fa7"}
 csi-cephfsplugin I0923 05:59:37.174404       1 util.go:48] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-
 96c2-82dd802f6fa7 cephfs: EXEC ceph [-m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 --id admin --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs
  dump --format=json]
 csi-cephfsplugin I0923 05:59:37.735689       1 util.go:48] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-
 96c2-82dd802f6fa7 cephfs: EXEC ceph [-m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 --id admin --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs
  ls --format=json]
 csi-cephfsplugin I0923 05:59:37.755997       1 utils.go:159] ID: 112514 GRPC call: /csi.v1.Identity/Probe
 csi-cephfsplugin I0923 05:59:37.756025       1 utils.go:160] ID: 112514 GRPC request: {}
 csi-cephfsplugin I0923 05:59:37.756911       1 utils.go:165] ID: 112514 GRPC response: {}
 csi-cephfsplugin I0923 05:59:38.962412       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
 csi-cephfsplugin I0923 05:59:38.962432       1 mount_linux.go:174] systemd-run failed with: exit status 1
 csi-cephfsplugin I0923 05:59:38.962441       1 mount_linux.go:175] systemd-run output: Failed to create bus connection: No such file or directory
 csi-cephfsplugin I0923 05:59:38.962545       1 volumemounter.go:208] requested mounter: , chosen mounter: fuse
csi-cephfsplugin I0923 05:59:38.962584       1 nodeserver.go:151] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd6
 1-11ea-96c2-82dd802f6fa7 cephfs: mounting volume 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-96c2-82dd802f6fa7 with Ceph F
 USE driver
 csi-cephfsplugin I0923 05:59:38.962638       1 util.go:48] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-
 96c2-82dd802f6fa7 cephfs: EXEC ceph-fuse [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/globalmount -m 10.142.139.100:67
 89,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=***stripped*** -r /volumes/csi/csi-vol-f2edc26c-fd61-11ea-96c2-82dd80
 2f6fa7 -o nonempty --client_mds_namespace=cephfs]
 csi-cephfsplugin I0923 05:59:39.063414       1 nodeserver.go:129] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd6
 1-11ea-96c2-82dd802f6fa7 cephfs: successfully mounted volume 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-96c2-82dd802f6fa7
  to /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/globalmount
 csi-cephfsplugin I0923 05:59:39.063450       1 utils.go:165] ID: 112513 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11e
 a-96c2-82dd802f6fa7 GRPC response: {}
 csi-cephfsplugin I0923 05:59:39.070270       1 utils.go:159] ID: 112515 GRPC call: /csi.v1.Node/NodeGetCapabilities
 csi-cephfsplugin I0923 05:59:39.070288       1 utils.go:160] ID: 112515 GRPC request: {}
 csi-cephfsplugin I0923 05:59:39.071051       1 utils.go:165] ID: 112515 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
 csi-cephfsplugin I0923 05:59:39.076788       1 utils.go:159] ID: 112516 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11e
 a-96c2-82dd802f6fa7 GRPC call: /csi.v1.Node/NodePublishVolume
 csi-cephfsplugin I0923 05:59:39.076812       1 utils.go:160] ID: 112516 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11e
 a-96c2-82dd802f6fa7 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/globalmount","ta
 rget_path":"/var/lib/kubelet/pods/ed4a7a27-6af3-4d95-9f93-46b8a6b2e044/volumes/kubernetes.io~csi/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/mount","volume_capabili
 ty":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["debug","noatime","_netdev"]}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"29e5e193-4fd2
 -4f98-b08a-8afee4a1a7bd","fsName":"cephfs","storage.kubernetes.io/csiProvisionerIdentity":"1600132272856-8081-cephfs.csi.ceph.com"},"volume_id":"0001-0024-29e5e1
 93-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-96c2-82dd802f6fa7"}
 csi-cephfsplugin I0923 05:59:39.080427       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
 csi-cephfsplugin I0923 05:59:39.080445       1 mount_linux.go:174] systemd-run failed with: exit status 1
 csi-cephfsplugin I0923 05:59:39.080453       1 mount_linux.go:175] systemd-run output: Failed to create bus connection: No such file or directory
 csi-cephfsplugin I0923 05:59:39.080497       1 util.go:48] ID: 112516 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-
 96c2-82dd802f6fa7 cephfs: EXEC mount [-o bind,_netdev,debug,noatime /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/global
 mount /var/lib/kubelet/pods/ed4a7a27-6af3-4d95-9f93-46b8a6b2e044/volumes/kubernetes.io~csi/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/mount]
 csi-cephfsplugin I0923 05:59:39.083272       1 nodeserver.go:209] ID: 112516 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd6
 1-11ea-96c2-82dd802f6fa7 cephfs: successfully bind-mounted volume 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11ea-96c2-82dd802
 f6fa7 to /var/lib/kubelet/pods/ed4a7a27-6af3-4d95-9f93-46b8a6b2e044/volumes/kubernetes.io~csi/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/mount
 csi-cephfsplugin I0923 05:59:39.084670       1 utils.go:165] ID: 112516 Req-ID: 0001-0024-29e5e193-4fd2-4f98-b08a-8afee4a1a7bd-0000000000000001-f2edc26c-fd61-11e
 a-96c2-82dd802f6fa7 GRPC response: {}

Is there anyone who can give me a hand with this? Much appreciated in advance.

@nixpanic
Member

The mount commands that are executed do list the options correctly:

"mount_flags":["debug","noatime","_netdev"]

It is possible that the options passed to fuse are not available on the output of mount (or /proc/mounts). In order to check that the options are passed, you will need to check the running ceph-fuse process and its commandline parameters. In the csi-cephfsplugin container you should be able to run ps axfu | grep ceph-fuse or similar to get the details.
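
For example, assuming the Helm deployment described above, something along these lines should work (the pod name is a placeholder; pick the nodeplugin pod running on the node where the volume was mounted):

$ kubectl -n kube-system get pods -o wide | grep nodeplugin
$ kubectl -n kube-system exec -it <ceph-csi-cephfs-nodeplugin-pod> -c csi-cephfsplugin -- ps axfu | grep ceph-fuse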

@nixpanic added the component/cephfs label on Sep 23, 2020
@PrometheusYu
Author

The mount commands that are executed do list the options correctly:

"mount_flags":["debug","noatime","_netdev"]

It is possible that the options passed to fuse are not available on the output of mount (or /proc/mounts). In order to check that the options are passed, you will need to check the running ceph-fuse process and its commandline parameters. In the csi-cephfsplugin container you should be able to run ps axfu | grep ceph-fuse or similar to get the details.

Well, I did check the ceph-fuse processes in the csi-cephfsplugin container on the node where the mount took place. I got the following output:

ps auxf |grep ceph-fuse
root     19279  0.0  0.0   9096   672 pts/0    S+   07:55   0:00  \_ grep --color=auto ceph-fuse
root       203  0.3  0.1 4211048 54416 ?       Sl   Sep10  74:54 ceph-fuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-03fc9f5b-d545-4d06-96ba-52624ceb0b1a/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=/tmp/csi/keys/keyfile-086589673 -r /volumes/csi/csi-vol-409620d4-f0df-11ea-83c6-ee13fb4b456d -o nonempty --client_mds_namespace=cephfs
root      3427  0.0  0.2 1587388 89908 ?       Sl   Sep11   7:14 ceph-fuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e3ed2c14-c45e-4f64-8b66-f92fe67e718b/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=/tmp/csi/keys/keyfile-504227701 -r /volumes/csi/csi-vol-45569102-f40e-11ea-83c6-ee13fb4b456d -o nonempty --client_mds_namespace=cephfs
root     18822  0.0  0.0 1595576 40824 ?       Sl   Sep15   4:41 ceph-fuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d3456573-e390-4175-b797-1ae5ca435c1d/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=/tmp/csi/keys/keyfile-621772587 -r /volumes/csi/csi-vol-fab4ecb4-98e1-11ea-9d18-5613cc9db0fd -o nonempty --client_mds_namespace=cephfs
root     18920  0.0  0.0 1095852 13624 ?       Sl   05:59   0:01 ceph-fuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fc72a88-b573-4f75-b302-77b3a5069163/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=/tmp/csi/keys/keyfile-176132935 -r /volumes/csi/csi-vol-f2edc26c-fd61-11ea-96c2-82dd802f6fa7 -o nonempty --client_mds_namespace=cephfs

From this we can see that the '-o' option of this ceph-fuse command does not, in fact, include any 'noatime' parameter.

@Madhu-1
Collaborator

Madhu-1 commented Sep 23, 2020

you need to set fuseMountOptions

# fuseMountOptions: debug
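
(For what it's worth, fuseMountOptions is a comma-separated string of ceph-fuse mount options, so several options together would look like this; the value below is illustrative:)

fuseMountOptions: "debug,noatime"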

@PrometheusYu
Copy link
Author

PrometheusYu commented Sep 24, 2020

you need to set fuseMountOptions

# fuseMountOptions: debug

Thanks for your advice. @Madhu-1
But after following your instruction, something seems to go wrong while the parameter is passed along.
The modified storageclass.yaml looks like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc-noatime
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cephfs.csi.ceph.com
parameters:
  fuseMountOptions: noatime
  clusterID: 29e5e193-4fd2-4f98-b08a-8afee4a1a7bd
  fsName: cephfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

Then, after deploying the demo.yaml file, I see the following error:

  Warning  FailedMount             9s                 kubelet, worker2.kt1.hk.sbibits.com  MountVolume.MountDevice failed for volume "pvc-5670f0b7-9d8a-46c9-bd5
6-57cf0d1820b0" : rpc error: code = Internal desc = an error occurred while running (23600) ceph-fuse [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5670f0b
7-9d8a-46c9-bd56-57cf0d1820b0/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=***str
ipped*** -r /volumes/csi/csi-vol-bab14e2b-fe03-11ea-96c2-82dd802f6fa7 -o nonempty ,noatime --client_mds_namespace=cephfs]: exit status 22: fuse: invalid argumen
t `,noatime'                                                                                                                                                    
ceph-fuse[23609]: fuse failed to initialize2020-09-24 01:17:44.270 7fce6f57fe00 -1 init, newargv = 0x55bf0df21f10 newargc=10                                    
                                                                                                                                                                
2020-09-24 01:17:44.270 7fce6f57fe00 -1 fuse_parse_cmdline failed.                                                                                              
  Warning  FailedMount  7s  kubelet, worker2.kt1.hk.sbibits.com  MountVolume.MountDevice failed for volume "pvc-5670f0b7-9d8a-46c9-bd56-57cf0d1820b0" : rpc erro
r: code = Internal desc = an error occurred while running (23794) ceph-fuse [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5670f0b7-9d8a-46c9-bd56-57cf0d182
0b0/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=***stripped*** -r /volumes/csi/c
si-vol-bab14e2b-fe03-11ea-96c2-82dd802f6fa7 -o nonempty ,noatime --client_mds_namespace=cephfs]: exit status 22: 2020-09-24 01:17:46.591 7f077808ee00 -1 init, n
ewargv = 0x557477ecff10 newargc=10fuse: invalid argument `,noatime'                                                                                             
                                                                                                                                                                
2020-09-24 01:17:46.592 7f077808ee00 -1 fuse_parse_cmdline failed.                                                                                              
ceph-fuse[23803]: fuse failed to initialize

This seems a little odd, right? For multiple mount options, should there be a separate '-o' for each option, like

ceph-fuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-03fc9f5b-d545-4d06-96ba-52624ceb0b1a/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=/tmp/csi/keys/keyfile-025574877 -r /volumes/csi/csi-vol-409620d4-f0df-11ea-83c6-ee13fb4b456d -o nonempty -o noatime --client_mds_namespace=cephfs

instead of

ceph-fuse /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-03fc9f5b-d545-4d06-96ba-52624ceb0b1a/globalmount -m 10.142.139.100:6789,10.142.139.101:6789,10.142.139.102:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=/tmp/csi/keys/keyfile-025574877 -r /volumes/csi/csi-vol-409620d4-f0df-11ea-83c6-ee13fb4b456d -o nonempty,noatime --client_mds_namespace=cephfs

Is that right?
Or is the root cause simply the single space character: "-o nonempty[space],noatime"?
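
(A generic shell illustration, not the plugin's actual code path, of why the space breaks things: with the space present, ',noatime' ends up as an argument of its own instead of being part of the '-o' value, which is exactly what fuse_parse_cmdline rejects:)

$ printf '[%s] ' -o nonempty ,noatime ; echo    # -> [-o] [nonempty] [,noatime]
$ printf '[%s] ' -o nonempty,noatime ; echo     # -> [-o] [nonempty,noatime]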

@Madhu-1
Collaborator

Madhu-1 commented Sep 24, 2020

The space between mount options is fixed with #1499, which will be part of the next release. And we are using a single -o for all the mount options.

@nixpanic PTAL

@stale

stale bot commented Dec 25, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

The stale bot added the wontfix label on Dec 25, 2020
@stale

stale bot commented Jul 21, 2021

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

The stale bot closed this as completed on Jul 21, 2021