provision volume failed with kubernetes v1.9.0 #502
Original issue, reported by @svasseur:

When I add a volume with vsphere-volume, I always get the error "AttachVolume.Attach failed for volume "xxxx" : 404 Not Found". It works like a charm in v1.6.5.

I tried to create a VMDK volume:

```
vmkfstools -c 10G /vmfs/volumes/Datastore-12/volumes/kubeVolume.vmdk
```

and afterwards ran kubectl describe pod. I also tried with stateful storage and ran kubectl describe pvc.

Comments
@svasseur: Yes, I can ping my vSphere server from the master node.
@svasseur I see a problem in the deployment YAML with the statically created volume using vmkfstools. Can you use the following volume path?
The 404 is a different issue we still need to debug. In the above config, also check the /etc/kubernetes/vsphere.conf file and make sure all vCenter-specific parameters are set correctly.
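The snippet referred to above was not captured. For context, a statically provisioned vSphere volume is attached through the in-tree vsphereVolume plugin with a volumePath in the "[datastore] folder/disk.vmdk" form. A minimal sketch of such a pod spec, assuming the VMDK created with vmkfstools above; the pod name, image, filesystem type, and mount path are placeholders, not taken from this thread:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vsphere-static-test        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: test-volume
      mountPath: /data
  volumes:
  - name: test-volume
    vsphereVolume:
      # Datastore path format: "[<datastore>] <folder>/<disk>.vmdk",
      # derived from /vmfs/volumes/Datastore-12/volumes/kubeVolume.vmdk
      volumePath: "[Datastore-12] volumes/kubeVolume.vmdk"
      fsType: ext4
```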
@svasseur: Sorry for not including all the parameters; it's because everything works in v1.6.5, and I do see the volume created with vmkfstools (the earlier paste was a wrong copy/paste). Here is /etc/kubernetes/vsphere.conf:
The problem also happens with stateful storage (the same YAML works in 1.6.5).
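The actual vsphere.conf contents did not survive the capture. For orientation, a minimal sketch of the legacy vSphere cloud provider config format used around the 1.6-1.9 era; every value below is a placeholder, not taken from this issue:

```
# /etc/kubernetes/vsphere.conf -- illustrative placeholder values only
[Global]
        user = "administrator@vsphere.local"
        password = "changeme"
        server = "vcenter.example.com"
        port = "443"
        insecure-flag = "1"
        datacenter = "Datacenter1"
        datastore = "Datastore-12"
        working-dir = "kubernetes"

[Disk]
        scsicontrollertype = pvscsi
```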
Hi everybody,

I created the following StorageClass:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: medhub-sc-default
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  fstype: ext3
  datastore: Shared Storages/pcc-006537
```

Then I tried to create a PVC using this storage class:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: medhub-sc-default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

At first I got one error; I decided to restart all the nodes, and then the error changed to a different one. The datastores are of course shared, so I do not understand this error. Thanks for the help!
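The actual error messages did not survive the capture. The usual way to see them (a generic debugging step, not a command quoted from the thread) is to inspect the claim's events:

```
kubectl get pvc pvcsc001
kubectl describe pvc pvcsc001   # provisioning failures show up under Events
```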
@maximematheron we are working on the fix. You can track this issue - vmware-archive/kubernetes-archived#436. Temporarily, if you want to unblock yourself while trying out 1.9.0, you can move datastore pcc-006537 to the root storage folder (/).
Regarding this issue, we have the fix merged into the 1.9 branch - kubernetes/kubernetes#58124
Thanks for the response @divyenpatel! I just specified the datastore without the folder name and it worked. Basically, my SC YAML is now the following:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: medhub-sc-default
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  fstype: ext3
  datastore: pcc-006537
```

It is really weird that with v1.6.5 I had to specify the folder. What is also really weird is that for both versions I need to restart all the nodes at least once to make it succeed (to create the PVC). Do you know why this is happening?

Maxime
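To reproduce the workaround, a hedged sequence (the manifest file names here are placeholders I chose, not from the thread):

```
kubectl delete storageclass medhub-sc-default   # drop the old folder-qualified class
kubectl create -f medhub-sc-default.yaml        # recreate it with datastore: pcc-006537
kubectl create -f pvcsc001.yaml                 # re-request the claim
kubectl get pvc pvcsc001                        # should eventually report STATUS Bound
```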
@maximematheron thank you for sharing the workaround. I was thinking of it the other way around, but your suggestion to change the YAML looks good. I have also tried this out end to end (provisioning a pod that uses the PVC) for a shared datastore located in a datastore cluster, with just the storage class fixed. Everything worked fine.
This is happening because in 1.9 we check the datastore name the user specified in the storage class against the shared datastores queried from vCenter, comparing just the names: a folder-qualified value like "Shared Storages/pcc-006537" never matches the bare datastore name "pcc-006537" returned by the query, while the bare name does. See https://github.com/kubernetes/kubernetes/blob/release-1.9/pkg/cloudprovider/providers/vsphere/vsphere.go#L1064
With this fix - kubernetes/kubernetes#58124 - we make sure vSphere connections are renewed when they have timed out, so from 1.9.2 on you will not need to restart the nodes.
Thanks @divyenpatel! I have built kubectl from your commit (my cluster is still built with the v1.9.0 release) and the error is still the same: I have to restart my cluster all the time. Actually, the same error exists in v1.6.5: I can create PVCs, but then Pods cannot see them, and I have to restart the cluster again and again. Should I create the cluster with the kubectl from your commit?

Maxime
@maximematheron Here is the official community support (Slack and email alias) - https://vmware.github.io/hatchway/#support
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.