cephfs: Scaling up fails if mount directory contains some files #54

Closed
nak3 opened this issue Jul 31, 2018 · 1 comment

nak3 commented Jul 31, 2018

Description

When the target directory for CephFS contains files, ceph-fuse fails to mount it unless the nonempty option is passed. As a result, scaling up pods fails once the volume contains any files.

Steps to reproduce

1. Run one pod with a FUSE mount

  $ kubectl get pod web-server-76bdb8c758-b9m7d 
  NAME                          READY     STATUS    RESTARTS   AGE
  web-server-76bdb8c758-b9m7d   1/1       Running   0          3m

  $ kubectl exec -it web-server-76bdb8c758-b9m7d bash
  root@web-server-76bdb8c758-b9m7d:/# mount |grep ceph  
  ceph-fuse on /var/lib/www/html type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

2. Make a file under the mounted volume inside the pod

  (inside pod)
  root@web-server-76bdb8c758-b9m7d:/# touch /var/lib/www/html/test
  root@web-server-76bdb8c758-b9m7d:/# exit

3. Scale up the deployment

  $ kubectl scale --replicas=2 deployment web-server

Result

  • The new pod fails to mount the volume:
  $ kubectl get pod
  web-server-76bdb8c758-b9m7d   1/1       Running             0          25m
  web-server-76bdb8c758-w499x   0/1       ContainerCreating   0          18m

  $  kubectl logs csi-cephfsplugin-4sbnr csi-cephfsplugin
  ...
  ceph-fuse[452]: starting ceph client
  fuse: mountpoint is not empty
  fuse: if you are sure this is safe, use the 'nonempty' mount option
  ceph-fuse[452]: fuse failed to start
  2018-07-31 06:46:31.268 7fd5523cec00 -1 fuse_mount(mountpoint=/var/lib/kubelet/plugins/csi-cephfsplugin/controller/volumes/vol-8f3200c0-948c-11e8-9d90-54e1ad486e52) failed.
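
For completeness, the same failure should also be visible on the pending pod itself; this is a sketch of how to confirm it with kubectl describe (the exact event text is not reproduced here):

  // Check the pending pod's events (sketch; output not captured here).
  // Expect FailedMount events referencing the same ceph-fuse
  // "mountpoint is not empty" error as in the plugin log above.
  $ kubectl describe pod web-server-76bdb8c758-w499x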

nak3 commented Jul 31, 2018

Just FYI, the same error can be reproduced with a local command.

// Add some file under target directory.
]# touch /mnt/cephfs/test

// Try to mount cephfs to /mnt/cephfs. (and it fails)
]# ceph-fuse  /mnt/cephfs  -m 10.64.222.11:6789   -c ceph.client.admin.keyring
2018-07-31 02:47:39.038282 7f7d3457a0c0 -1 init, newargv = 0x55c54341ee40 newargc=9ceph-fuse[12016]: starting ceph client

fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
ceph-fuse[12016]: fuse failed to start
2018-07-31 02:47:39.057991 7f7d3457a0c0 -1 fuse_mount(mountpoint=/mnt/cephfs) failed.
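
Based on the suggestion in the error message, passing the fuse nonempty option should let ceph-fuse mount over the non-empty directory. A sketch of the manual workaround (same monitor address and keyring as above; not captured output):

  // Retry the mount with the 'nonempty' option suggested by fuse.
  ]# ceph-fuse -o nonempty /mnt/cephfs -m 10.64.222.11:6789 -c ceph.client.admin.keyring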

rootfs closed this as completed in #55 on Aug 7, 2018