PersistentVolumeClaim is pending when using Shared File System with csi #4012

Closed
dingsongjie opened this issue Sep 28, 2019 · 23 comments

@dingsongjie

Rook version 1.1.1
PVC describe output:

Name:          smartretail
Namespace:     smartretail
StorageClass:  smartretail
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type     Reason              Age   From                                                                                                              Message
  ----     ------              ----  ----                                                                                                              -------
  Warning  ProvisioningFailed  60m   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-7d4984d44b-wftls_6a814093-e0f6-11e9-8632-1ac0ecf20b5a  failed to provision volume with StorageClass "smartretail": rpc error: code = Internal desc = an error occurred while running (14018) ceph [fs subvolume create comteck-rookceph-filesystem-replica3 csi-vol-307e8ec8-e18e-11e9-b25e-1ac0ecf20b5a 1073741824 --group_name csi --mode 777 -m 10.97.65.186:6789,10.100.55.243:6789,10.107.179.172:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=***stripped*** --pool_layout smartretail]: exit status 22: Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 914, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/volumes/module.py", line 188, in handle_command
    return handler(inbuf, cmd)
  File "/usr/share/ceph/mgr/volumes/module.py", line 230, in _cmd_fs_subvolume_create
    mode=cmd.get('mode', '755'))
  File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 346, in conn_wrapper
    result = func(self, fs_h, **kwargs)
  File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 373, in create_subvolume
    sv.create_subvolume(spec, size, pool=pool, mode=self.octal_str_to_decimal_int(mode))
  File "/usr/share/ceph/mgr/volumes/fs/subvolume.py", line 82, in create_subvolume
    self.fs.setxattr(subvolpath, 'ceph.dir.layout.pool', pool.encode('utf-8'), 0)
  File "cephfs.pyx", line 1087, in cephfs.LibCephFS.setxattr (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.4/rpm/el7/BUILD/ceph-14.2.4/build/src/pybind/cephfs/pyrex/cephfs.c:11862)
InvalidValue: [Errno 22] error in setxattr
  (the same ProvisioningFailed warning and Python traceback repeated on every retry over the next hour, each with a new csi-vol ID)
  Normal     ExternalProvisioning  2m21s (x233 over 60m)  persistentvolume-controller                                                                                       waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator
  Normal     Provisioning          47s (x21 over 60m)     rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-7d4984d44b-wftls_6a814093-e0f6-11e9-8632-1ac0ecf20b5a  External provisioner is provisioning volume for claim "smartretail/smartretail"
Mounted By:  comteck-smartretail-api-archives-5498d68b87-6942b
             comteck-smartretail-api-archives-5498d68b87-zrbrx

@Madhu-1
Member

Madhu-1 commented Sep 28, 2019

The minimum Ceph version required is 14.2.2: https://github.com/ceph/ceph-csi#support-matrix
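
One quick way to check which Ceph version the cluster is actually running is from the toolbox pod (a sketch, assuming the standard rook-ceph-tools deployment in the rook-ceph namespace):

$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph versions

ceph versions reports each daemon type (mon, mgr, osd, mds) with the release it is running.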

@dingsongjie
Author

dingsongjie commented Sep 28, 2019

> The minimum Ceph version required is 14.2.2: https://github.com/ceph/ceph-csi#support-matrix

I use ceph/ceph:v14.2.4-20190917, as in https://github.com/rook/rook/blob/v1.1.1/cluster/examples/kubernetes/ceph/cluster.yaml:

Name:         rook-ceph
Namespace:    rook-ceph
Labels:       <none>
Annotations:  <none>
API Version:  ceph.rook.io/v1
Kind:         CephCluster
Metadata:
  Creation Timestamp:  2019-09-26T05:26:08Z
  Finalizers:
    cephcluster.ceph.rook.io
  Generation:        2316
  Resource Version:  1207157
  Self Link:         /apis/ceph.rook.io/v1/namespaces/rook-ceph/cephclusters/rook-ceph
  UID:               24e898f2-e01e-11e9-8ca1-408d5c2ff1e0
Spec:
  Ceph Version:
    Image:  ceph/ceph:v14.2.4-20190917
  Dashboard:
    Enabled:           true
    URL Prefix:        /rook
  Data Dir Host Path:  /var/lib/rook
  Disruption Management:
    Machine Disruption Budget Namespace:  openshift-machine-api
    Osd Maintenance Timeout:              30
  External:
    Enable:  false
  Mgr:
  Mon:
    Count:  3
  Monitoring:
  Network:
    Host Network:  false
    Provider:      
    Selectors:     <nil>
  Placement:
    All:
      Node Affinity:
        Required During Scheduling Ignored During Execution:
          Node Selector Terms:
            Match Expressions:
              Key:       kubernetes.io/hostname
              Operator:  In
              Values:
                server03
                server05
                server06
  Rbd Mirroring:
    Workers:  0
  Storage:
    Config:  <nil>
    Directories:
      Config:  <nil>
      Path:    /var/lib/rook
    Nodes:
      Config:  <nil>
      Directories:
        Config:  <nil>
        Path:    /var/lib/rook
      Name:      server03
      Resources:
        Limits:
          Cpu:     500m
          Memory:  1Gi
        Requests:
          Cpu:     500m
          Memory:  1Gi
      Config:      <nil>
      Directories:
        Config:  <nil>
        Path:    /var/lib/rook
      Name:      server05
      Resources:
        Limits:
          Cpu:     500m
          Memory:  1Gi
        Requests:
          Cpu:     500m
          Memory:  1Gi
      Config:      <nil>
      Directories:
        Config:  <nil>
        Path:    /var/lib/rook
      Name:      server06
      Resources:
        Limits:
          Cpu:     500m
          Memory:  1Gi
        Requests:
          Cpu:                  500m
          Memory:               1Gi
    Storage Class Device Sets:  <nil>
    Use All Devices:            false
Status:
  Ceph:
    Details:
      TOO FEW PGS:
        Message:      too few PGs per OSD (16 < min 30)
        Severity:     HEALTH_WARN
    Health:           HEALTH_WARN
    Last Changed:     2019-09-26T07:49:40Z
    Last Checked:     2019-09-28T02:53:09Z
    Previous Health:  HEALTH_ERR
  State:              Created
Events:               <none>

@Madhu-1
Member

Madhu-1 commented Sep 28, 2019

Looks like a cephfs issue, @ajarr PTAL

@dingsongjie
Author

> Looks like a cephfs issue, @ajarr PTAL

After I reset the CephCluster, the errors changed:

  Warning    ProvisioningFailed    3m25s                 rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-7d4984d44b-wftls_6a814093-e0f6-11e9-8632-1ac0ecf20b5a  failed to provision volume with StorageClass "smartretail": rpc error: code = Internal desc = an error occurred while running (1854) ceph [fs subvolume create comteck-rookceph-filesystem-replica3 csi-vol-e1f22eee-e1c2-11e9-b25e-1ac0ecf20b5a 1073741824 --group_name csi --mode 777 -m 10.97.196.113:6789,10.106.223.63:6789,10.110.199.65:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=***stripped*** --pool_layout smartretail]: exit status 2: Error ENOENT: Subvolume group 'csi' not found, create it with `ceph fs subvolumegroup create` before creating subvolumes
  (the same ENOENT warning repeated on every retry, each with a new csi-vol ID)
  Normal     Provisioning          64s (x9 over 3m27s)   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-7d4984d44b-wftls_6a814093-e0f6-11e9-8632-1ac0ecf20b5a  External provisioner is provisioning volume for claim "smartretail/smartretail"
  Warning    ProvisioningFailed    62s                   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-7d4984d44b-wftls_6a814093-e0f6-11e9-8632-1ac0ecf20b5a  failed to provision volume with StorageClass "smartretail": rpc error: code = Internal desc = an error occurred while running (3582) ceph [fs subvolume create comteck-rookceph-filesystem-replica3 csi-vol-375eb7b1-e1c3-11e9-b25e-1ac0ecf20b5a 1073741824 --group_name csi --mode 777 -m 10.97.196.113:6789,10.106.223.63:6789,10.110.199.65:6789 -c /etc/ceph/ceph.conf -n client.admin --keyfile=***stripped*** --pool_layout smartretail]: exit status 2: Error ENOENT: Subvolume group 'csi' not found, create it with `ceph fs subvolumegroup create` before creating subvolumes
  Normal     ExternalProvisioning  12s (x14 over 3m27s)  persistentvolume-controller 

@Madhu-1
Member

Madhu-1 commented Sep 28, 2019

Can you restart the cephfs provisioner pods and try?

@dingsongjie
Author

> Can you restart the cephfs provisioner pods and try?

I tried; it goes back to the previous error.

@ajarr
Contributor

ajarr commented Oct 9, 2019

@dingsongjie, what are your Ceph filesystem's data pools? Check with

$ ceph fs ls

You can issue the above command from the Ceph toolbox pod/container.

If the pool you set here,
https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/csi/cephfs/storageclass.yaml#L15

is not a CephFS data pool, then you hit the error you see.
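
For illustration, a StorageClass whose pool matches an actual CephFS data pool might look like this (a minimal sketch based on the Rook example linked above; rook-cephfs, myfs, and myfs-data0 are placeholder names):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs          # must match a filesystem name reported by `ceph fs ls`
  pool: myfs-data0      # must be one of that filesystem's data pools
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete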

@zhangxpp

zhangxpp commented Nov 8, 2019

For your reference, I've successfully done it.

Ceph 14.2.4
Rook v1.1: https://rook.github.io/docs/rook/v1.1/ceph-quickstart.html

I used the YAMLs as attached. I encountered another error, shown below, and fixed it by deleting and recreating all of the csi-cephfsplugin-provisioner pods:
lock is held by csi-cephfsplugin-provisioner-75c965db4f-6zzbt and has not yet expired.
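
For reference, the provisioner pods can be deleted by label so their deployment recreates them (a sketch, assuming the default app=csi-cephfsplugin-provisioner label and the rook-ceph namespace):

kubectl -n rook-ceph delete pod -l app=csi-cephfsplugin-provisioner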

storageclass.txt
kube-registry.txt
filesystem.txt

@ajarr
Contributor

ajarr commented Nov 8, 2019

> For your reference, I've successfully done it.
>
> Ceph 14.2.4
> Rook v1.1: https://rook.github.io/docs/rook/v1.1/ceph-quickstart.html
>
> I used the YAMLs as attached. I encountered another error, shown below, and fixed it by deleting and recreating all of the csi-cephfsplugin-provisioner pods:
> lock is held by csi-cephfsplugin-provisioner-75c965db4f-6zzbt and has not yet expired.

@zhangxpp so you're able to create CephFS PVCs and you're not facing any issues?


@zhangxpp

zhangxpp commented Nov 8, 2019

Yes, correct, @ajarr.
Kubernetes 1.16.2.

@tuapuikia

Restarting the cephfs provisioner should solve your problem:
kubectl delete pod -l app=csi-cephfsplugin-provisioner --grace-period=0 --force

@Antiarchitect

Antiarchitect commented Nov 26, 2019

Having the same issue. CentOS 7, Linux kernel 5.3.8-1, K8s 1.14.8, Rook 1.1.7 (official chart), Ceph 14.2.4-20191112: InvalidValue: [Errno 22] error in setxattr.
Restarting the provisioners clears this error, but the PVC is still pending, exactly as here: https://serverfault.com/questions/991624/mountvolume-mountdevice-failed-operation-with-the-given-volume-id-already-exists

@kringalf

It looked like this for me too. After checking

ceph fs ls

as suggested by @ajarr, it turned out the pool name in the StorageClass was wrong. After adjusting that and restarting the provisioner, it worked for me.

@Antiarchitect

Antiarchitect commented Dec 1, 2019

I have the same pool values:

ceph fs ls
name: core-rook, metadata pool: core-rook-metadata, data pools: [core-rook-data0 ]

(I don't know why there is an extra space inside the data pools array.)

kubectl get sc core-rook-cephfs -oyaml | fgrep pool:
  pool: core-rook-data0

And I still get these errors.

@Madhu-1 Madhu-1 added the csi label Jan 23, 2020
@stale

stale bot commented Apr 23, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Apr 23, 2020
@stale

stale bot commented Apr 30, 2020

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@tomassrnka

I ran into a similar problem:

$ kubectl describe pvc cephfs-pvc -n kube-system
Name:          cephfs-pvc
Namespace:     kube-system
StorageClass:  rook-cephfs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    kube-registry-58659ff99b-g6jdt
               kube-registry-58659ff99b-ktbjf
               kube-registry-58659ff99b-mp4kw
Events:
  Type     Reason                Age                From                                                                                                             Message
  ----     ------                ----               ----                                                                                                             -------
  Warning  ProvisioningFailed    20s                rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-95ccff866-7x9zm_935a4d39-b76c-47c6-a1a8-796e9902d3d4  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Internal desc = an error occurred while running (41213) ceph [fs subvolume create myfs csi-vol-d877ec63-8d10-11ea-a630-561cf479fcc5 1073741824 --group_name csi --mode 777 -m 10.101.205.143:6789,10.97.157.103:6789,10.105.158.248:6789 -c /etc/ceph/ceph.conf -n client.csi-cephfs-provisioner --keyfile=***stripped*** --pool_layout myfs-data0]: exit status 2: Error ENOENT: subvolume group 'csi' does not exist
  (the same ENOENT warning repeated on every retry, each with a new csi-vol ID)
  Normal   ExternalProvisioning  12s (x3 over 22s)  persistentvolume-controller                                                                                      waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          6s (x5 over 22s)   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-95ccff866-7x9zm_935a4d39-b76c-47c6-a1a8-796e9902d3d4  External provisioner is provisioning volume for claim "kube-system/cephfs-pvc"

Fixed by creating the csi subvolume group manually via the toolbox:

$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
[root@rook-ceph-tools-59c5486bc5-n92jn /]# ceph fs volume ls
[
    {
        "name": "myfs"
    }
]
[root@rook-ceph-tools-59c5486bc5-n92jn /]# ceph fs subvolumegroup create myfs csi
[root@rook-ceph-tools-59c5486bc5-n92jn /]# exit
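
As a follow-up check, the new group should then show up when listing subvolume groups from the same toolbox (assuming `ceph fs subvolumegroup ls` is available in your Ceph release):

$ ceph fs subvolumegroup ls myfs
[
    {
        "name": "csi"
    }
]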

@narphu

narphu commented Oct 1, 2020

> I have the same pool values:
>
> ceph fs ls
> name: core-rook, metadata pool: core-rook-metadata, data pools: [core-rook-data0 ]
>
> (I don't know why there is an extra space inside the data pools array.)
>
> kubectl get sc core-rook-cephfs -oyaml | fgrep pool:
>   pool: core-rook-data0
>
> And I still get these errors.

You might want to cross-check the filesystem name with your StorageClass file. That was my problem.
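
For example, both values can be compared side by side (a sketch; fsName and pool are the parameter names used by the Rook CephFS StorageClass example, and rook-cephfs is a placeholder):

$ ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-data0 ]
$ kubectl get sc rook-cephfs -o yaml | grep -E 'fsName|pool:'
  fsName: myfs
  pool: myfs-data0

fsName must match the filesystem's name field, and pool must be one of its data pools.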

@rory-ye-nv

Ran into this issue with Rook v1.6.0.

@Addyvan

Addyvan commented Sep 22, 2021

Ran into this issue with Rook v1.7.3. The fix for me, as mentioned above by @tomassrnka, was to manually create the subvolumegroup using:

kubectl exec -n rook-ceph -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
$ ceph fs subvolumegroup create myfs csi

@degorenko
Contributor

Also ran into this issue with Rook v1.7.6. Fixed it by restarting the cephfs-csi-provisioner pods.

@prazumovsky

prazumovsky commented Dec 29, 2022

Rook v1.8.9, Ceph v15.2.13: still observing this issue. Why is it closed?

@travisn
Member

travisn commented Jan 3, 2023

There are a number of reasons a PVC can be stuck in the Pending state, most commonly that the CSI driver is not running properly, or that Ceph is unhealthy or misconfigured. See the CSI troubleshooting guide.
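
As a first pass, it usually narrows things down to confirm the CSI pods are running and the cluster is healthy (a sketch, assuming the default rook-ceph namespace and toolbox):

$ kubectl -n rook-ceph get pod -l app=csi-cephfsplugin-provisioner
$ kubectl -n rook-ceph get pod -l app=csi-cephfsplugin
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status

If ceph status does not report HEALTH_OK, fix the cluster first; otherwise check the provisioner pod logs.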
