Support for deploying to CentOS #119

Closed
zimmertr opened this issue Mar 2, 2020 · 29 comments · Fixed by #149
Labels: Need community involvement

Comments


zimmertr commented Mar 2, 2020

Description

The zfs-operator manifest is incompatible with CentOS. After deployment, some hostPath references to ZFS libraries fail:

Warning  FailedMount  20m (x5 over 20m)    kubelet, localhost.localdomain  MountVolume.SetUp failed for volume "libzpool" : hostPath type check failed: /lib/libzpool.so.2.0.0 is not a file
  Warning  FailedMount  20m (x5 over 20m)    kubelet, localhost.localdomain  MountVolume.SetUp failed for volume "libnvpair" : hostPath type check failed: /lib/libnvpair.so.1.0.1 is not a file
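One hedged workaround (not from the project's docs) is to rewrite the manifest's Ubuntu-style library hostPath entries to the paths the CentOS node actually has. The CentOS filenames below are assumptions; verify the real ones on the node with `ldconfig -p | grep -e libzpool -e libnvpair` before applying.

```shell
# Sketch: patch Ubuntu-style library hostPaths in the operator manifest to
# CentOS locations. The /lib64 filenames are assumptions; confirm them on
# your node before applying the edited manifest.
cat > /tmp/zfs-volumes-snippet.yaml <<'EOF'
- name: libzpool
  hostPath:
    path: /lib/libzpool.so.2.0.0
    type: File
- name: libnvpair
  hostPath:
    path: /lib/libnvpair.so.1.0.1
    type: File
EOF
# CentOS 7 installs the ZFS userland libraries under /lib64
sed -i -e 's|/lib/libzpool.so.2.0.0|/lib64/libzpool.so.2|' \
       -e 's|/lib/libnvpair.so.1.0.1|/lib64/libnvpair.so.1|' \
       /tmp/zfs-volumes-snippet.yaml
grep 'path:' /tmp/zfs-volumes-snippet.yaml
```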

Context

I use CentOS/RHEL in production, not Ubuntu.

Possible Solution

This manifest with modified library paths was sent to me via Slack and deploys successfully.


zimmertr commented Mar 2, 2020

This is coming up now but not quite working yet.

ZFS is running on my CentOS host:

$> zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
FlashPool    190K   449G       24K  /FlashPool
SaturnPool  11.7T   624G     11.6T  /SaturnPool

And a StorageClass is set up for the SaturnPool storage pool.

$> kubectl get sc -oyaml saturnpool
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"saturnpool"},"parameters":{"poolname":"saturnpool","recordsize":"4k"},"provisioner":"zfs.csi.openebs.io"}
  creationTimestamp: "2020-03-02T03:29:06Z"
  name: saturnpool
  resourceVersion: "53651"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/saturnpool
  uid: 4ba42ae0-891e-470d-bfab-d686d1573f8c
parameters:
  poolname: saturnpool
  recordsize: 4k
provisioner: zfs.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: Immediate

However, after creating a PVC and a Pod like so:

$> kubectl apply -f ~/Desktop/test.yml
persistentvolumeclaim/test-pvc created
pod/fio created

$> cat ~/Desktop/test.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  storageClassName: saturnpool
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  restartPolicy: Never
  containers:
  - name: perfrunner
    image: openebs/tests-fio
    command: ["/bin/bash"]
    args: ["-c", "while true ;do sleep 50; done"]
    volumeMounts:
       - mountPath: /datadir
         name: test-pvc
    tty: true
  volumes:
  - name: test-pvc
    persistentVolumeClaim:
      claimName: test-pvc

The openebs-zfs-node-operator logs the following to stdout:

time="2020-03-02T05:21:30Z" level=info msg="Got add event for ZV saturnpool/pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11"
time="2020-03-02T05:21:30Z" level=error msg="zfs: could not create volume saturnpool/pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11 cmd [create -V 4294967296 saturnpool/pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11] error: zfs: error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory\n"
E0302 05:21:30.915120       1 volume.go:252] error syncing 'openebs/pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11': exit status 127, requeuing
[... the same "could not create volume" error and "error syncing ... requeuing" pair repeat with increasing backoff ...]
time="2020-03-02T05:21:32Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities"
time="2020-03-02T05:21:32Z" level=info msg="GRPC request: {}"
time="2020-03-02T05:21:32Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":2}}}]}"
time="2020-03-02T05:21:33Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities"
time="2020-03-02T05:21:33Z" level=info msg="GRPC request: {}"
time="2020-03-02T05:21:33Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":2}}}]}"
time="2020-03-02T05:21:33Z" level=info msg="GRPC call: /csi.v1.Node/NodePublishVolume"
time="2020-03-02T05:21:33Z" level=info msg="GRPC request: {\"target_path\":\"/var/lib/kubelet/pods/9b614b76-1aaa-4a06-a077-5cc552f72cef/volumes/kubernetes.io~csi/pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"storage.kubernetes.io/csiProvisionerIdentity\":\"1583126028849-8081-zfs.csi.openebs.io\"},\"volume_id\":\"pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11\"}"
time="2020-03-02T05:21:33Z" level=error msg="can not get device for volume:pvc-36b2a862-2ace-49bf-a6d5-b5fc5b345c11 dev  err: lstat /dev/zvol: no such file or directory"
time="2020-03-02T05:21:33Z" level=error msg="GRPC error: rpc error: code = Internal desc = rpc error: code = Internal desc = zvol can not be mounted"
[... the NodeGetCapabilities/NodePublishVolume calls, the "lstat /dev/zvol: no such file or directory" error, and the "could not create volume" requeue loop repeat through 05:22:05 ...]

This indicates that it is still hitting the same error:

zfs: error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory


zimmertr commented Mar 2, 2020

[root@saturn ~]# ldd `which zfs`
	linux-vdso.so.1 =>  (0x00007ffe19764000)
	libnvpair.so.1 => /lib64/libnvpair.so.1 (0x00007ff2e7a17000)
	libuutil.so.1 => /lib64/libuutil.so.1 (0x00007ff2e7808000)
	libzfs.so.2 => /lib64/libzfs.so.2 (0x00007ff2e758d000)
	libzfs_core.so.1 => /lib64/libzfs_core.so.1 (0x00007ff2e7387000)
	libblkid.so.1 => /lib64/libblkid.so.1 (0x00007ff2e7147000)
	libm.so.6 => /lib64/libm.so.6 (0x00007ff2e6e45000)
	libssl.so.10 => /lib64/libssl.so.10 (0x00007ff2e6bd3000)
	libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007ff2e6770000)
	librt.so.1 => /lib64/librt.so.1 (0x00007ff2e6568000)
	libudev.so.1 => /lib64/libudev.so.1 (0x00007ff2e6352000)
	libuuid.so.1 => /lib64/libuuid.so.1 (0x00007ff2e614d000)
	libz.so.1 => /lib64/libz.so.1 (0x00007ff2e5f37000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff2e5d1b000)
	libc.so.6 => /lib64/libc.so.6 (0x00007ff2e594d000)
	/lib64/ld-linux-x86-64.so.2 (0x00007ff2e7c2e000)
	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007ff2e5737000)
	libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007ff2e54ea000)
	libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007ff2e5201000)
	libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007ff2e4ffd000)
	libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007ff2e4dca000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007ff2e4bc6000)
	libcap.so.2 => /lib64/libcap.so.2 (0x00007ff2e49c1000)
	libdw.so.1 => /lib64/libdw.so.1 (0x00007ff2e4770000)
	libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007ff2e4560000)
	libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007ff2e435c000)
	libresolv.so.2 => /lib64/libresolv.so.2 (0x00007ff2e4143000)
	libattr.so.1 => /lib64/libattr.so.1 (0x00007ff2e3f3e000)
	libelf.so.1 => /lib64/libelf.so.1 (0x00007ff2e3d26000)
	liblzma.so.5 => /lib64/liblzma.so.5 (0x00007ff2e3b00000)
	libbz2.so.1 => /lib64/libbz2.so.1 (0x00007ff2e38f0000)
	libselinux.so.1 => /lib64/libselinux.so.1 (0x00007ff2e36c9000)
	libpcre.so.1 => /lib64/libpcre.so.1 (0x00007ff2e3467000)


zimmertr commented Mar 2, 2020

CentOS version:

[root@saturn ~]# cat /etc/*release*
CentOS Linux release 7.7.1908 (Core)
Derived from Red Hat Enterprise Linux 7.7 (Source)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

CentOS Linux release 7.7.1908 (Core)
CentOS Linux release 7.7.1908 (Core)
cpe:/o:centos:centos:7

Linux Version:

[root@saturn ~]# uname -a
Linux saturn.sol.milkyway 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

ZFS Version:

[root@saturn ~]# zfs --version
zfs-0.8.3-1
zfs-kmod-0.8.3-1

@pawanpraka1 (Contributor) commented:

I have added this to our roadmap (https://github.com/orgs/openebs/projects/10) and will work on adding CentOS support in the next release (v0.6). However, I need your help if possible: I tried to install ZFS 0.8 on our in-house CentOS 7.7 machine by following https://github.com/openzfs/zfs/wiki/RHEL-and-CentOS, but could not succeed. Could you share the steps you used to install it on your machine?


zimmertr commented Mar 7, 2020


pawanpraka1 commented Mar 12, 2020

See if this modified YAML, which uses a local image, works for you:

https://raw.githubusercontent.com/pawanpraka1/zfs-localpv/centos/deploy/zfs-operator.yaml

This YAML uses the image pawanpraka1/zfs-driver:centos from my Docker Hub.

I changed the Dockerfile (buildscripts/zfs-driver/Dockerfile), rebuilt, and pushed the image to my own Docker Hub:

-FROM ubuntu:18.04
-RUN apt-get clean && rm -rf /var/lib/apt/lists/*
-RUN apt-get update; exit 0
-RUN apt-get -y install rsyslog libssl-dev xfsprogs ca-certificates
+FROM centos:7
+RUN yum -y install rsyslog libssl-dev xfsprogs ca-certificates
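For reference, the full CentOS-based stage implied by that diff might look like the sketch below. Note that `libssl-dev` is the Debian/Ubuntu package name; on CentOS the equivalent is `openssl-devel`, and `e2fsprogs` would be needed for `mkfs.ext4`. Both package choices here are assumptions, not the author's exact Dockerfile.

```dockerfile
# Hypothetical CentOS 7 base for the zfs-driver image. Package names are
# assumptions: openssl-devel replaces Debian's libssl-dev, and e2fsprogs
# provides mkfs.ext4 for ext4-formatted zvols.
FROM centos:7
RUN yum -y install rsyslog openssl-devel xfsprogs e2fsprogs ca-certificates \
    && yum clean all
```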


zimmertr commented Mar 13, 2020

Hi, thank you for the quick turnaround! Everything provisions as expected and I can confirm that the ZFS Volumes are created on my Pool. However, OpenEBS appears to fail to format the filesystem on the volume. These logs are produced:

Events:
  Type     Reason                  Age                From                            Message
  ----     ------                  ----               ----                            -------
  Normal   Scheduled               64s                default-scheduler               Successfully assigned default/fio to saturn.sol.milkyway
  Normal   SuccessfulAttachVolume  64s                attachdetach-controller         AttachVolume.Attach succeeded for volume "pvc-475beb43-0a2d-4288-b7d6-b66958140b5f"
  Warning  FailedMount             14s (x7 over 48s)  kubelet, localhost.localdomain  MountVolume.SetUp failed for volume "pvc-475beb43-0a2d-4288-b7d6-b66958140b5f" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = rpc error: code = Internal desc = not able to format and mount the zvol
E0313 03:45:44.098915       1 mount_linux.go:496] format of disk "/dev/zvol/FlashPool/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f" failed: type:("ext4") target:("/var/lib/kubelet/pods/43faf4e4-0c22-4fc6-8328-af429de01c76/volumes/kubernetes.io~csi/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f/mount") options:(["defaults"])error:(executable file not found in $PATH)
time="2020-03-13T03:45:44Z" level=error msg="zfspv: failed to mount volume /dev/zvol/FlashPool/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f [ext4] to /var/lib/kubelet/pods/43faf4e4-0c22-4fc6-8328-af429de01c76/volumes/kubernetes.io~csi/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f/mount, error executable file not found in $PATH"
time="2020-03-13T03:45:44Z" level=error msg="GRPC error: rpc error: code = Internal desc = rpc error: code = Internal desc = not able to format and mount the zvol"
time="2020-03-13T03:46:16Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities"
time="2020-03-13T03:46:16Z" level=info msg="GRPC request: {}"
time="2020-03-13T03:46:16Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}}]}"
time="2020-03-13T03:46:16Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities"
time="2020-03-13T03:46:16Z" level=info msg="GRPC request: {}"
time="2020-03-13T03:46:16Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":2}}},{\"Type\":{\"Rpc\":{\"type\":3}}}]}"
time="2020-03-13T03:46:16Z" level=info msg="GRPC call: /csi.v1.Node/NodePublishVolume"
time="2020-03-13T03:46:16Z" level=info msg="GRPC request: {\"target_path\":\"/var/lib/kubelet/pods/43faf4e4-0c22-4fc6-8328-af429de01c76/volumes/kubernetes.io~csi/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f/mount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"storage.kubernetes.io/csiProvisionerIdentity\":\"1584071025176-8081-zfs.csi.openebs.io\"},\"volume_id\":\"pvc-475beb43-0a2d-4288-b7d6-b66958140b5f\"}"
E0313 03:46:16.186256       1 mount_linux.go:147] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/zvol/FlashPool/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f /var/lib/kubelet/pods/43faf4e4-0c22-4fc6-8328-af429de01c76/volumes/kubernetes.io~csi/pvc-475beb43-0a2d-4288-b7d6-b66958140b5f/mount
Output: mount: wrong fs type, bad option, bad superblock on /dev/zd0,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
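There appear to be two linked failures in the output above: `executable file not found in $PATH` from `mount_linux.go` suggests the image lacks `mkfs.ext4` (on CentOS, `xfsprogs` only ships `mkfs.xfs`; `mkfs.ext4` comes from `e2fsprogs`), and the later "wrong fs type, bad superblock" mount error would then follow because the zvol was never formatted. A quick check inside the node plugin container, sketched here rather than taken from the project's tooling:

```shell
# Sketch: check whether the format helpers kubelet/CSI may need are present
# in the image. A missing mkfs.<fstype> explains both the "executable file
# not found in $PATH" format error and the subsequent mount failure on the
# unformatted zvol.
for tool in mkfs.ext4 mkfs.xfs; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: present"
  else
    echo "$tool: missing"
  fi
done
```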

@pawanpraka1 (Contributor) commented:

Can you try this StorageClass? Modify the pool name accordingly.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
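With `fstype: "zfs"` the driver creates a ZFS dataset and mounts it directly, so no zvol is created and no `mkfs` binary is needed in the image, which sidesteps the formatting failure above. A PVC consuming that class might look like the sketch below (the claim name is illustrative):

```yaml
# Illustrative PVC bound to the openebs-zfspv class above; with fstype "zfs"
# the driver provisions a dataset (no zvol, no mkfs step).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfspv-claim
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```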

@zimmertr (Author) commented:

Hi, it looks like things are working! Thank you for your help.

pawanpraka1 reopened this Apr 3, 2020
@pawanpraka1 (Contributor) commented:

Reopening this; will close it once the changes are in production.

pawanpraka1 transferred this issue from openebs/openebs May 18, 2020

jlcox1970 commented May 21, 2020

I am also facing an issue on CentOS 8 nodes with this driver.
I see the following in the openebs-zfs-plugin container logs:

time="2020-05-21T23:39:27Z" level=error msg="zfs: could not set mountpoint on dataset main/pvc-b27284ae-3033-49e3-9c18-89f0c4895fcb cmd [set mountpoint=/var/lib/kubelet/pods/e1d092c2-7120-4dd8-a0de-4b523f19b972/volumes/kubernetes.io~csi/pvc-b27284ae-3033-49e3-9c18-89f0c4895fcb/mount main/pvc-b27284ae-3033-49e3-9c18-89f0c4895fcb] error: zfs: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory\n"

@zimmertr (Author) commented:

Please use the manifests and image described in this comment by @pawanpraka1 until this has been resolved upstream.

@jlcox1970 commented:

I have used that image and also the StorageClass YAML that was mentioned.
That got me to the point of no other errors in the logs apart from the above.

@jlcox1970 commented:

Finally worked this out: the image is CentOS 7 based, so it has libc-2.17 and all the libraries that go with it. However, I am on CentOS 8, which has libc-2.28.

I can update the Docker image, but I need to know how to run the Makefile to build the zfs-driver.

Alternatively, can I just copy the binary out of the existing container and into the new one?

@pawanpraka1 (Contributor) commented:

@jlcox1970 sorry, we couldn't prioritize this for our release. Clone https://github.com/pawanpraka1/zfs-localpv and check out the centos branch, then change the base image to centos8 in the Dockerfile (https://github.com/pawanpraka1/zfs-localpv/blob/centos/buildscripts/zfs-driver/Dockerfile). To build the image, just run make.

@jlcox1970 commented:

@pawanpraka1 that's OK, and I am happy to help, as this is my cluster at home :)

As for the make, I have never used Go before, so I get a whole lot of errors:

--> Running go fmt
can't load package: package _/root/git/zfs-localpv/cmd: cannot find package "_/root/git/zfs-localpv/cmd" in any of:
    /usr/lib/golang/src/_/root/git/zfs-localpv/cmd (from $GOROOT)
    /root/go/src/_/root/git/zfs-localpv/cmd (from $GOPATH)
can't load package: package _/root/git/zfs-localpv/pkg/apis/openebs.io/core/v1alpha1: cannot find package "_/root/git/zfs-localpv/pkg/apis/openebs.io/core/v1alpha1" in any of:
    /usr/lib/golang/src/_/root/git/zfs-localpv/pkg/apis/openebs.io/core/v1alpha1 (from $GOROOT)

Can you give me some pointers on setting up a Go environment?
Thanks

pawanpraka1 added the "Need community involvement" label May 22, 2020

pawanpraka1 commented May 22, 2020

@jlcox1970 here is the developer setup doc: https://github.com/openebs/zfs-localpv/blob/master/docs/developer-setup.md. You can follow this to set up Go: https://www.tecmint.com/install-go-in-linux/ (use Go version 1.12.5).
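The build errors above are the classic pre-modules GOPATH symptom: Go 1.12 without modules expects the repository under `$GOPATH/src` at its canonical import path, so a clone under `/root/git` produces the `_/root/git/...` package errors. A minimal sketch of the workspace layout (paths and clone target are illustrative, not the project's documented steps):

```shell
# Sketch: pre-modules (Go 1.12) workspace layout. Cloning anywhere outside
# $GOPATH/src yields "_/root/git/..." package-resolution errors like the
# ones pasted above.
export GOPATH="$HOME/go"
mkdir -p "$GOPATH/src/github.com/openebs"
# Clone into the canonical import path, then build (commented out here):
#   git clone -b centos https://github.com/pawanpraka1/zfs-localpv \
#       "$GOPATH/src/github.com/openebs/zfs-localpv"
#   cd "$GOPATH/src/github.com/openebs/zfs-localpv" && make
echo "workspace: $GOPATH/src/github.com/openebs"
```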

I have also pushed a centos8 image and updated the operator file with it. You can try this YAML, which uses the centos8 image: https://raw.githubusercontent.com/pawanpraka1/zfs-localpv/centos/deploy/zfs-operator.yaml

@jlcox1970 commented:

@pawanpraka1 that fixed it nicely :)
I will look into the dev environment setup this weekend and make it all containerised so that I don't have to revisit it :)

@pawanpraka1 (Contributor) commented:

Sure @jlcox1970. Just to confirm: are you able to provision the volume using the latest YAML?

@pawanpraka1
Copy link
Contributor

pawanpraka1 commented May 22, 2020

Can you also provide the details below:

  1. zfs version
  2. ldd `which zfs`
  3. cat /etc/release

@jlcox1970 commented:

Yes, the volume is provisioned.

zfs-0.8.4-1.el8.x86_64

```
linux-vdso.so.1 (0x00007fff74dcb000)
libnvpair.so.1 => /lib/libnvpair.so.1 (0x00007f7523620000)
libuutil.so.1 => /lib/libuutil.so.1 (0x00007f7523410000)
libzfs.so.2 => /lib/libzfs.so.2 (0x00007f7523190000)
libzfs_core.so.1 => /lib/libzfs_core.so.1 (0x00007f7522f88000)
libblkid.so.1 => /lib64/libblkid.so.1 (0x00007f7522d30000)
libm.so.6 => /lib64/libm.so.6 (0x00007f75229a8000)
libssl.so.1.1 => /lib64/libssl.so.1.1 (0x00007f7522710000)
libcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007f7522230000)
librt.so.1 => /lib64/librt.so.1 (0x00007f7522020000)
libtirpc.so.3 => /lib64/libtirpc.so.3 (0x00007f7521de8000)
libudev.so.1 => /lib64/libudev.so.1 (0x00007f7521bc0000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00007f75219b8000)
libz.so.1 => /lib64/libz.so.1 (0x00007f75217a0000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7521580000)
libc.so.6 => /lib64/libc.so.6 (0x00007f75211b8000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7523a60000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f7520fa0000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f7520d98000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f7520b48000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f7520858000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f7520638000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f7520430000)
libmount.so.1 => /lib64/libmount.so.1 (0x00007f75201d0000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f751ffb8000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f751fdb0000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f751fb98000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f751f968000)
libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f751f6e0000)
```

CentOS Linux release 8.1.1911 (Core)

@pawanpraka1 pawanpraka1 moved this from Near term goals to In progress in ZFS Local PV May 22, 2020
@pawanpraka1
Copy link
Contributor

@zimmertr, @jlcox1970 I am working on a change (pawanpraka1@6c567bd) that can work for both Ubuntu and CentOS. I have tested it on Ubuntu, and it seems to be working. I need one favor from you guys: if you can test this on CentOS 7 and CentOS 8 and it is working, then I will upstream this change.

https://raw.githubusercontent.com/pawanpraka1/zfs-localpv/lib/deploy/zfs-operator.yaml

@zimmertr
Copy link
Author

zimmertr commented May 27, 2020

I have actually moved away from CentOS 7 to Flatcar Linux since making this post and do not have a server available to test anymore. If @jlcox1970 is unable to as well, I could probably spin something up in a VM though. It just won't resemble a production environment any longer.

I haven't yet tested this with Flatcar. Is that distro supported?

@pawanpraka1
Copy link
Contributor

@zimmertr thanks. Flatcar will also be supported if the libraries required by zfs are in /lib or /lib64 and the binary is installed at /sbin/zfs or /usr/sbin/zfs. If that is the case, the same yaml will work; otherwise we have to make a small modification to the yaml and it should just work.
We can check two things to confirm that:

  1. `which zfs` <== to check the binary path
  2. `` ldd `which zfs` `` <== to check the libraries

@jlcox1970
Copy link

jlcox1970 commented Jun 1, 2020 via email

@pawanpraka1
Copy link
Contributor

@jlcox1970 thanks for helping me with the testing. It seems like more work needs to be done here. I should probably restrict the host library mounts to only what the zfs binary needs. Currently the whole pod is exposed to the host libraries, and that is probably causing the issue. Let me fix that. Thanks again @jlcox1970.
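For reference, restricting the mounts like that could look as follows in the node plugin's DaemonSet spec: per-file `hostPath` volumes (type `File`) for the binary and each library it links against, instead of exposing whole host directories. This is only an illustrative sketch, not the actual change; the volume names and the library soname are made up here for the example:

```yaml
# Sketch only: mount the zfs binary and one of its libraries individually
# into the plugin container, rather than the whole host /lib.
volumeMounts:
  - name: host-zfs-bin
    mountPath: /sbin/zfs
  - name: host-libzfs
    mountPath: /lib/libzfs.so.2
volumes:
  - name: host-zfs-bin
    hostPath:
      path: /sbin/zfs
      type: File
  - name: host-libzfs
    hostPath:
      path: /lib/libzfs.so.2
      type: File
```

The `type: File` check is also what produced the original "is not a file" mount errors when a library version did not exist on the host, so the exact paths have to match what `ldd` reports on the node.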

@pawanpraka1
Copy link
Contributor

pawanpraka1 commented Jun 5, 2020

@jlcox1970, sorry for the delay. Please see if the below yamls work for you.

CentOS 7:

https://raw.githubusercontent.com/pawanpraka1/zfs-localpv/centos/deploy/operators/centos7/zfs-operator.yaml

CentOS 8:

https://raw.githubusercontent.com/pawanpraka1/zfs-localpv/centos/deploy/operators/centos8/zfs-operator.yaml

@kmova kmova closed this as completed in #149 Jun 8, 2020
ZFS Local PV automation moved this from In progress to Done Jun 8, 2020
@pawanpraka1
Copy link
Contributor

pawanpraka1 commented Sep 9, 2020

We have merged PR #204 to make the ZFS operator yaml agnostic to the underlying operating system. In the 1.0 release and onward we can use the same operator yaml (https://github.com/openebs/zfs-localpv/blob/master/deploy/zfs-operator.yaml) for all operating systems.
