This repository has been archived by the owner on Oct 11, 2023. It is now read-only.
Rancher OS amd64 4.14.32-rancher2
RancherOS v1.4.0
VMware
Sometimes NFS volumes mount perfectly fine, other times...
```
MountVolume.SetUp failed for volume "pvc-dba9311c-a7c4-11e8-b39a-00505685234f" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs la-6pnasvmnfs02.internal.ieeeglobalspec.com:/k8s_vols/qt_cluster_max_test/zabbix-zabbix-4-web-nginx-nfs-pvc-dba9311c-a7c4-11e8-b39a-00505685234f /opt/rke/var/lib/kubelet/pods/46471c56-a7cc-11e8-b39a-00505685234f/volumes/kubernetes.io~nfs/pvc-dba9311c-a7c4-11e8-b39a-00505685234f
Output: mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
```
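A quick way to confirm the error's premise is to check whether the NFS lock-manager helpers are actually running on the worker. This is a generic Linux sketch (service management on RancherOS may differ, so treat it as a diagnostic only):

```shell
# Check for the processes mount.nfs needs for remote locking.
# rpcbind must be up for rpc.statd to register.
for proc in rpcbind rpc.statd; do
  if pgrep -x "$proc" >/dev/null 2>&1; then
    echo "$proc: running"
  else
    echo "$proc: not running"
  fi
done
```

If `rpc.statd` is not running, starting it (or `rpcbind` first, if that is also down) before the mount retry should clear this particular error without resorting to `nolock`.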
If I set a mount option of nolock on the volume:
```
MountVolume.SetUp failed for volume "pvc-b9b65f81-a7cf-11e8-b39a-00505685234f" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nolock la-6pnasvmnfs02.internal.ieeeglobalspec.com:/k8s_vols/qt_cluster_max_test/zabbix-zabbix-4-nginx1-nfs-pvc-b9b65f81-a7cf-11e8-b39a-00505685234f /opt/rke/var/lib/kubelet/pods/a9d7d0ad-a7d5-11e8-b39a-00505685234f/volumes/kubernetes.io~nfs/pvc-b9b65f81-a7cf-11e8-b39a-00505685234f
Output: mount.nfs: access denied by server while mounting la-6pnasvmnfs02.internal.ieeeglobalspec.com:/k8s_vols/qt_cluster_max_test/zabbix-zabbix-4-nginx1-nfs-pvc-b9b65f81-a7cf-11e8-b39a-00505685234f
```
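For reference, a per-volume option like `nolock` is set through the PersistentVolume's `mountOptions` field, which the kubelet passes straight to `mount.nfs`. A minimal sketch with placeholder names and paths (the real server and export are the ones in the logs above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv          # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nolock                    # keeps locks local; avoids the rpc.statd dependency
  nfs:
    server: nfs.example.com     # placeholder; your NFS server
    path: /k8s_vols/example     # placeholder export path
```

Note that `nolock` only works around the statd error; the "access denied by server" failure above is an export-permission problem on the server side, not a locking one.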
This is after rebooting the worker node as well. Two other pods with PVCs on the same NFS export are working fine.
I was able to successfully bring up the PVC (read/write many) on the pod once. I then tried to scale out and add a node, then got the above error on the second pod. After trying to scale back, it hung on deleting the pod - I had to manually kill it. I then rebooted the worker node and got the first error immediately upon restart.
Doing a mount and grepping for the volume shows nothing mounted.
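When verifying this, it can help to read the kernel mount table directly rather than `mount` output; kubelet NFS mounts land under a `kubernetes.io~nfs` path component, so grepping for that string covers all PVCs at once:

```shell
# /proc/mounts reflects what the kernel actually has mounted.
if grep 'kubernetes.io~nfs' /proc/mounts; then
  echo "kubelet NFS volume(s) mounted"
else
  echo "no kubelet NFS mounts present"
fi
```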
Trying to mount it from the command line fails with either a permission-denied error or:
```
[rancher@la-1tkube-w2 docker]$ sudo mount -t nfs -o nolock la-6pnasvmnfs02.internal.ieeeglobalspec.com:/k8s_vols/qt_cluster_max_test/zabbix-zabbix-4-nginx1-nfs-pvc-b9b65f81-a7cf-11e8-b39a-00505685234f /opt/rke/var/lib/kubelet/pods/a9d7d0ad-a7d5-11e8-b39a-00505685234f/volumes/kubernetes.io~nfs/pvc-b9b65f81-a7cf-11e8-b39a-00505685234f
mount: mounting la-6pnasvmnfs02.internal.ieeeglobalspec.com:/k8s_vols/qt_cluster_max_test/zabbix-zabbix-4-nginx1-nfs-pvc-b9b65f81-a7cf-11e8-b39a-00505685234f on /opt/rke/var/lib/kubelet/pods/a9d7d0ad-a7d5-11e8-b39a-00505685234f/volumes/kubernetes.io~nfs/pvc-b9b65f81-a7cf-11e8-b39a-00505685234f failed: No such file or directory
```
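`mount.nfs` reports "No such file or directory" when either the local mountpoint or the remote export path does not exist. The local side is easy to rule out; a sketch with a hypothetical path (the kubelet normally creates the pod volume directory itself, but a failed teardown can leave it missing after a reboot):

```shell
# Hypothetical mountpoint; substitute the real
# /opt/rke/var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~nfs/<pvc> path.
MNT=/tmp/example-nfs-mountpoint
mkdir -p "$MNT"    # recreate the mountpoint if it is missing
ls -ld "$MNT"      # confirm it exists before retrying the mount
```

If the local directory exists, the remaining suspect is the export path on the NFS server, e.g. if the provisioned subdirectory was removed during the hung pod deletion.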
Any ideas? This doesn't seem like a usable option for persistent storage right now.
I launched RancherOS v1.4.0 on AWS and added the cluster with Rancher 2.0.8.
Using NFS as the storage backend, creating a StatefulSet or Deployment works fine.
After rebooting the node, however, the pod goes into an error state with the message above. Scaling out did not succeed either, although I can still create a new StatefulSet or Deployment successfully.
I did the same thing on an Ubuntu system, where a reboot has no such effect.