Support for out-of-tree vSphere Cloud Provider Interface (CPI) and Cloud Storage Interface (CSI) #23357
Comments
As part of the move to the cloud controller manager, could Rancher be extended to run the cloud controller manager as part of its own deployment? All Kubernetes clusters would then talk to Rancher, and Rancher would proxy the requests to the cloud provider. This would help with clusters that don't have direct connectivity to the cloud provider. It would also make cluster deployment simpler, as Rancher can auto-configure the cluster to point to itself and use a stored cloud credential to proxy any requests.
Our customer would find the following features useful, and it appears they are not available in the in-tree providers:
I couldn't pin down a date or version for the CPI, but it looks like both will phase out around Kubernetes v1.21? This may not be tomorrow, but the feeling is that we'll blink and it'll be around the corner. It would be good to have an idea of when support for this will land in Rancher, so customers can have some confidence that they'll have time to shake it down in the field before it's required.
@cloudnautique Looks like the (other) customer we spoke of is on OpenStack and needs CPI and CSI as well.
This could be useful for those looking for a community-contributed Helm chart to deploy vSphere CSI/CPI: https://github.com/stefanvangastel/vsphere-cpi-csi-helm
Set milestone, assigned, etc. as per Denise.
I just wanted to make sure it wasn't lost that I was told to use this issue to note the other providers that are needed, rather than opening new issues. It is for that reason that I noted the OpenStack CPI and CSI here. @cloudnautique please let me know if this changes and we need to open more issues.
For anyone here wondering, we should have a migration path from in-tree:
Migration can work using the steps from the doc linked above and the chart we're adding to the Rancher catalog. But due to an existing bug in the vSphere CSI driver, it will only work for volumes provisioned using a certain cloud-config format; this issue explains the bug in detail: kubernetes-sigs/vsphere-csi-driver#628. Rancher issue to track it: #31105
@deniseschannon
@mitchellmaler can you provide some details on what your workflow looks like, from the perspective of creating the cluster and deploying the external cloud provider charts? Are you using Fleet to deploy the CPI/CSI apps?
We use the Rancher Terraform provider to bring up the vSphere Rancher-provisioned cluster, with the RKE config's cloud provider set to `external`, which starts the clean cluster with all the nodes tainted. After that we deploy the vSphere CPI/CSI first (using manifests or the Helm CLI) so it registers and untaints the nodes. All other addons are then deployed using Apps v2 (Fleet agent Helm operation), since they can be scheduled at that point. It would be great to be able to deploy the CSI/CPI using Apps v2 as well, but since the nodes are all tainted before anything can be deployed, it causes some workflow issues: Fleet does not tolerate those taints. I haven't tried using the old Apps to deploy this, which might solve the problem in our workflow as a temporary workaround if it can run while tolerating the taints (will have to give this a try). A few other potential issues from the PR: rancher/helm3-charts#53
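For readers trying to reproduce this workflow, the pieces fit together roughly as follows. This is a sketch based on the RKE config options and the upstream cloud controller manager conventions, not an excerpt from the thread; double-check the field names against your versions. In the RKE cluster config, an `external` cloud provider makes the kubelet start with `--cloud-provider=external`, which leaves every node tainted until a CCM initializes it:

```yaml
# RKE cluster.yml fragment (sketch): run with an external cloud provider.
# Nodes come up with the taint
#   node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
# until a CCM such as vsphere-cloud-controller-manager removes it.
cloud_provider:
  name: external
```

Any workload that must run before the CCM untaints the nodes (in particular, the CPI/CSI pods themselves) needs a matching toleration in its pod spec, which is what the Fleet-deployed apps were missing here:

```yaml
# Pod-spec toleration (sketch) letting CPI/CSI pods schedule
# onto still-uninitialized nodes:
tolerations:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```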
I guess for 2 that depends on the OS you use, such as a CoreOS variant or RancherOS. Really, the prefix path should be something that can be added to the hostPath values.
rancher/rancher:v2.5.6-rc6
Tested fresh installs, migration, and volume expansion on a vSphere 7.0 environment:
Fresh install:
Volume Expansion:
Migration:
Our QA vSphere 6.7 environment is not at 6.7U3 and as such does not support the out-of-tree cloud provider, so I was not able to validate the out-of-tree cloud provider in that environment; however, I was able to see the issue mentioned above:
I have logged an issue for this here: #31550
@bmdepesa how will it deal with the host paths when RKE-created nodes don't use the standard /var/lib/kubelet paths and have a prefix set? The template is hard-coded to use that host path: https://github.com/rancher/helm3-charts/blob/429ac83cdb31a87be2d434ac463148cbe9988bc2/charts/vsphere-csi/v2.1.0/templates/vsphere-csi-node-ds.yaml#L122 Flatcar, CoreOS, RancherOS, etc. all use the /opt/rke/var/lib/kubelet paths instead of the standard paths: https://github.com/rancher/rke/blob/master/hosts/hosts.go#L60 The chart should have a way to provide a prefix for all the host paths, for those OS types or just for users who set a prefix in their RKE config.
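One way a chart could address this, sketched below. This is hypothetical, not the actual chart API: `kubeletPath` is an invented value name used only to illustrate the idea of exposing the prefix as a chart value and building every hostPath from it.

```yaml
# values.yaml (hypothetical value name): default matches stock distros;
# RKE nodes on Flatcar/CoreOS/RancherOS would set /opt/rke/var/lib/kubelet.
kubeletPath: /var/lib/kubelet
```

The node DaemonSet template would then substitute the value instead of hard-coding the path:

```yaml
# templates fragment (sketch): the hard-coded hostPath becomes templated.
volumes:
  - name: pods-mount-dir
    hostPath:
      path: {{ .Values.kubeletPath | quote }}
      type: Directory
```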
Thanks @mitchellmaler, I was able to confirm this behavior on RancherOS, where the
rancher/rancher:v2.5.6-rc7
We've moved the charts to the Cluster Explorer feature charts, so they will be bundled in air-gapped installs, and we've mirrored all images. As part of the chart refactoring we exposed the prefix path to
What kind of request is this (question/bug/enhancement/feature request):
Feature request
Description:
There is a new out-of-tree vSphere Cloud Provider Interface (CPI); see https://cloud-provider-vsphere.sigs.k8s.io/ . In the Kubernetes 1.20 timeframe, the in-tree cloud providers will be removed and everyone will need to use out-of-tree cloud providers.
The Rancher UI could be enhanced to allow users to select "vSphere" as a Cloud Provider option when creating a custom cluster. The user could enter configuration information for vSphere, such as the vCenter IP, port, username, password, network, etc. When deploying the cluster, Rancher would automatically create the ConfigMap, Service, DaemonSet, etc. needed for the vsphere-cloud-controller-manager workload.
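To give a sense of what Rancher would generate under the hood, here is a minimal sketch of the configuration the out-of-tree CPI consumes. The YAML `vsphere.conf` layout follows the cloud-provider-vsphere documentation as best I recall it; verify the exact keys against the upstream docs, and note that every server, user, and password value below is a placeholder:

```yaml
# ConfigMap (sketch) mounted by the vsphere-cloud-controller-manager
# DaemonSet in kube-system; all names and credentials are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vsphere-cloud-config
  namespace: kube-system
data:
  vsphere.conf: |
    global:
      port: 443
      insecureFlag: true
    vcenter:
      my-vc:
        server: vc.example.com
        user: admin@vsphere.local
        password: changeme
        datacenters:
          - DC1
```

Rancher already holds the cloud credential for node provisioning, so it could fill in this ConfigMap (ideally sourcing the credentials from a Secret rather than inline) without the user re-entering anything.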
Support for the vSphere Cloud Storage Interface (CSI) would be great to include as part of this feature request.
This issue may also be relevant: #20131
gz#6513
gz#9676
gz#12549
gz#12592
gz#14500