update cloud controller manager docs for v1.8 #5400

docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml
@@ -0,0 +1,69 @@
```yaml
# This is an example of how to set up cloud-controller-manager as a DaemonSet
# in your cluster. It assumes that your masters can run pods and have the role
# node-role.kubernetes.io/master. Note that this DaemonSet will not work
# straight out of the box for your cloud; it is meant to be a guideline.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        # for in-tree providers we use gcr.io/google_containers/cloud-controller-manager
        # this can be replaced with any other image for out-of-tree providers
        image: gcr.io/google_containers/cloud-controller-manager:v1.8.0
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=<YOUR_CLOUD_PROVIDER>  # Add your own cloud provider here!
        - --leader-elect=true
        - --use-service-account-credentials
        # these flags will vary for every cloud provider
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=true
        - --cluster-cidr=172.17.0.0/16
      tolerations:
      # this is required so CCM can bootstrap itself
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      # this is to have the daemonset runnable on master nodes
      # the taint may vary depending on your cluster setup
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # this is to restrict CCM to only run on master nodes
      # the node selector may vary depending on your cluster setup
      nodeSelector:
        node-role.kubernetes.io/master: ""
```

docs/tasks/administer-cluster/developing-cloud-controller-manager.md
@@ -0,0 +1,36 @@
---
approvers:
- luxas
- thockin
- wlan0
title: Developing Cloud Controller Manager
---

**Cloud Controller Manager is an alpha feature in 1.8. In upcoming releases it will
be the preferred way to integrate Kubernetes with any cloud. This will ensure cloud providers
can develop their features independently from the core Kubernetes release cycles.**

* TOC
{:toc}

## Background

Before going into how to build your own cloud controller manager, some background on how it works under the hood is helpful. The cloud controller manager is code from `kube-controller-manager` that uses Go interfaces to allow implementations from any cloud to be plugged in. Most of the scaffolding and generic controller implementations will be in core, but it will always call out to the cloud interfaces it is provided, so long as the [cloud provider interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L29-L50) is satisfied.

To dive a little deeper into implementation details: all cloud controller managers import packages from Kubernetes core; the only difference is that each project registers its own cloud provider by calling [cloudprovider.RegisterCloudProvider](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/plugins.go#L42-L52), which updates a global variable of available cloud providers.
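
For illustration, a minimal registration sketch is shown below. The `RegisterCloudProvider` call and its factory signature come from `plugins.go`; the package name `mycloud` and the provider name are hypothetical placeholders.

```go
package mycloud

import (
	"errors"
	"io"

	"k8s.io/kubernetes/pkg/cloudprovider"
)

// ProviderName is the value operators pass via --cloud-provider.
const ProviderName = "mycloud"

func init() {
	// RegisterCloudProvider adds this factory to a global registry keyed by
	// provider name; cloud-controller-manager looks the name up at startup.
	cloudprovider.RegisterCloudProvider(ProviderName, func(config io.Reader) (cloudprovider.Interface, error) {
		// A real provider would parse config here and return a value whose
		// type implements every method of cloudprovider.Interface
		// (Instances, Zones, LoadBalancer, Routes, Clusters, ...).
		return nil, errors.New("mycloud: provider not implemented yet")
	})
}
```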

## Developing

### Out of Tree

To build an out-of-tree cloud-controller-manager for your cloud, follow these steps:

1. Create a Go package with an implementation that satisfies [cloudprovider.Interface](https://git.k8s.io/kubernetes/pkg/cloudprovider/cloud.go).
2. Use [main.go in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/controller-manager.go) from Kubernetes core as a template for your main.go. As mentioned above, the only difference should be the cloud package that will be imported.
3. Import your cloud package in `main.go`, and ensure your package has an `init` block that runs [cloudprovider.RegisterCloudProvider](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/plugins.go#L42-L52). A sketch of such a `main.go` follows this list.
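
The sketch below mirrors the upstream template under stated assumptions: `example.com/mycloud` is the hypothetical provider package from the earlier snippet, and the `options` fields and `app.Run` signature reflect the v1.8 tree; verify them against the linked controller-manager.go for your Kubernetes version.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"

	utilflag "k8s.io/apiserver/pkg/util/flag"
	"k8s.io/apiserver/pkg/util/logs"
	"k8s.io/kubernetes/cmd/cloud-controller-manager/app"
	"k8s.io/kubernetes/cmd/cloud-controller-manager/app/options"
	"k8s.io/kubernetes/pkg/cloudprovider"

	// Imported for side effects only: its init() registers the provider.
	_ "example.com/mycloud"
)

func main() {
	s := options.NewCloudControllerManagerServer()
	s.AddFlags(pflag.CommandLine)

	utilflag.InitFlags()
	logs.InitLogs()
	defer logs.FlushLogs()

	// Look up the provider registered under --cloud-provider=mycloud.
	cloud, err := cloudprovider.InitCloudProvider(s.CloudProvider, s.CloudConfigFile)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Cloud provider could not be initialized: %v\n", err)
		os.Exit(1)
	}

	if err := app.Run(s, cloud); err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		os.Exit(1)
	}
}
```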

Using existing out-of-tree cloud providers as an example may be helpful. You can find the list [here](/docs/tasks/administer-cluster/running-cloud-controller.md#examples).

### In Tree

For in-tree cloud providers, you can run the in-tree cloud controller manager as a [DaemonSet](/docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml) in your cluster. See the [running cloud controller manager docs](/docs/tasks/administer-cluster/running-cloud-controller.md) for more details.

docs/tasks/administer-cluster/running-cloud-controller.md
@@ -1,38 +1,93 @@
---
approvers:
- luxas
- thockin
- wlan0
title: Kubernetes Cloud Controller Manager
---

> Review comment: "You can add myself and @wlan0 here as well" / Reply: "done"

**Cloud Controller Manager is an alpha feature in 1.8. In upcoming releases it will be the preferred way to integrate Kubernetes with any cloud. This will ensure cloud providers can develop their features independently from the core Kubernetes release cycles.**

* TOC
{:toc}

## Cloud Controller Manager

Kubernetes v1.6 contains a new binary called `cloud-controller-manager`. `cloud-controller-manager` is a daemon that embeds cloud-specific control loops. These cloud-specific control loops were originally in `kube-controller-manager`. Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the `cloud-controller-manager` binary allows cloud vendors to evolve independently from the core Kubernetes code.

The `cloud-controller-manager` can be linked to any cloud provider that satisfies [cloudprovider.Interface](https://git.k8s.io/kubernetes/pkg/cloudprovider/cloud.go). For backwards compatibility, the [cloud-controller-manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager) provided in the core Kubernetes project uses the same cloud libraries as `kube-controller-manager`. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of Kubernetes core. In future Kubernetes releases, all cloud controller managers will be developed outside of the core Kubernetes project and maintained by SIG leads or cloud vendors.

## Administration

### Requirements

Every cloud has its own set of requirements for running its own cloud provider integration; they should not be too different from the requirements for running `kube-controller-manager`. As a general rule of thumb, you'll need:

* Cloud authentication/authorization: your cloud may require a token or IAM rules to allow access to its APIs.
* Kubernetes authentication/authorization: cloud-controller-manager may need RBAC rules set to speak to the Kubernetes apiserver (an illustrative role sketch follows this list).
* High availability: like kube-controller-manager, you may want a highly available setup for cloud controller manager using leader election (on by default).
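
The DaemonSet example above binds the service account to `cluster-admin` for brevity, which reviewers flagged as discouraged. A dedicated ClusterRole is preferable; every rule below is an assumption based on what the node, service, and route controllers touch, so fill in the exact verbs/resources for your provider (a tool like [audit2rbac](https://github.com/liggitt/audit2rbac) can generate them from audit logs):

```yaml
# Sketch only: a dedicated ClusterRole for cloud-controller-manager.
# The rules below are assumptions to be verified against your provider.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/status"]        # node controller
  verbs: ["get", "list", "watch", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["services", "services/status"]  # service controller
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
  resources: ["events"]                       # event recording
  verbs: ["create", "patch", "update"]
```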

### Running cloud-controller-manager

Successfully running cloud-controller-manager requires some changes to your cluster configuration:

* `kube-apiserver` and `kube-controller-manager` MUST NOT specify the `--cloud-provider` flag. This ensures that they do not run any cloud-specific loops that would be run by cloud controller manager. In the future, this flag will be deprecated and removed.
* `kubelet` must run with `--cloud-provider=external`. This is to ensure that the kubelet is aware that it must be initialized by the cloud controller manager before it is scheduled any work.
* `kube-apiserver` SHOULD NOT run the `PersistentVolumeLabel` admission controller, since the cloud controller manager takes over labeling persistent volumes. To prevent the PersistentVolumeLabel admission plugin from running, make sure the `kube-apiserver` has a `--admission-control` flag with a value that does not include `PersistentVolumeLabel`.
* For the `cloud-controller-manager` to label persistent volumes, initializers will need to be enabled and an InitializerConfiguration needs to be added to the system. Follow [these instructions](/docs/admin/extensible-admission-controllers.md#enable-initializers-alpha-feature) to enable initializers. Use the following YAML to create the InitializerConfiguration:

{% include code.html language="yaml" file="persistent-volume-label-initializer-config.yaml" ghlink="/docs/tasks/administer-cluster/persistent-volume-label-initializer-config.yaml" %}
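
Since the include above is not rendered on this page, here is a sketch of what such an InitializerConfiguration looks like; the initializer name and rule here are assumptions, so defer to the linked file:

```yaml
# Sketch: asks the API server to hold new PersistentVolumes as uninitialized
# until the cloud controller manager's PVL initializer has labeled them.
kind: InitializerConfiguration
apiVersion: admissionregistration.k8s.io/v1alpha1
metadata:
  name: pvlabel.kubernetes.io   # assumed name; check the linked file
initializers:
- name: pvlabel.kubernetes.io
  rules:
  - apiGroups: [""]
    apiVersions: ["*"]
    resources: ["persistentvolumes"]
```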

Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:

* kubelets specifying `--cloud-provider=external` will add a taint `node.cloudprovider.kubernetes.io/uninitialized` with an effect `NoSchedule` during initialization (see the snippet after this list). This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that in the event that cloud controller manager is not available, new nodes in the cluster will be left unschedulable. The taint is important since the scheduler may require cloud-specific information about nodes, such as their region or type (high cpu, gpu, high memory, spot instance, etc.).
* cloud information about nodes in the cluster will no longer be retrieved using local metadata; instead, all API calls to retrieve node information will go through cloud controller manager. This may mean you can restrict access to your cloud API on the kubelets for better security. For larger clusters, you may want to consider whether cloud controller manager will hit rate limits, since it is now responsible for almost all API calls to your cloud from within the cluster.
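
Concretely, the taint involved is the same one the DaemonSet example above tolerates; a kubelet started with `--cloud-provider=external` adds it to its own Node object:

```yaml
# Placed on a Node by a kubelet running with --cloud-provider=external;
# removed by the cloud controller manager once it initializes the node.
spec:
  taints:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```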

As of v1.8, cloud controller manager can implement:

* node controller - responsible for updating Kubernetes nodes using cloud APIs and deleting Kubernetes nodes that were deleted on your cloud.
* service controller - responsible for creating load balancers on your cloud for services of type LoadBalancer.
* route controller - responsible for setting up network routes on your cloud.
* [PersistentVolumeLabel Admission Controller](/docs/admin/admission-controllers#persistentvolumelabel) - responsible for labeling persistent volumes on your cloud; ensure that the persistent volume label admission plugin is not enabled on your kube-apiserver.
* any other features you would like to implement if you are running an out-of-tree provider.

## Examples

If you are using a cloud that is currently supported in Kubernetes core and would like to adopt cloud controller manager, see the [cloud controller manager in Kubernetes core](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager).

For cloud controller managers not in Kubernetes core, you can find the respective projects in repos maintained by cloud vendors or SIG leads.

* [DigitalOcean](https://github.com/digitalocean/digitalocean-cloud-controller-manager)
* [keepalived](https://github.com/munnerz/keepalived-cloud-provider)
* [Rancher](https://github.com/rancher/rancher-cloud-controller-manager)

> Review comment: "as @jhorwit2 said, you have two identical lists here, should be one..." / Reply: "fixed"

For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a DaemonSet in your cluster. Use the following as a guideline:

{% include code.html language="yaml" file="cloud-controller-manager-daemonset-example.yaml" ghlink="/docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml" %}

## Limitations

Running cloud controller manager comes with a few possible limitations. Although they are being addressed in upcoming releases, it's important that you are aware of these limitations for production workloads.

### Support for Volumes

Cloud controller manager does not implement any of the volume controllers found in `kube-controller-manager`, as the volume integrations also require coordination with kubelets. As we evolve CSI (the container storage interface) and add stronger support for flex volume plugins, the necessary support will be added to cloud controller manager so that clouds can fully integrate with volumes. Learn more about out-of-tree CSI volume plugins [here](https://github.com/kubernetes/features/issues/178).

### Scalability

In the previous architecture for cloud providers, we relied on kubelets using a local metadata service to retrieve information about their own node. With this new architecture, we now fully rely on the cloud controller manager to retrieve information for all nodes. For very large clusters, you should consider possible bottlenecks such as resource requirements and API rate limiting.

### Chicken and Egg

The goal of the cloud controller manager project is to decouple development of cloud features from the core Kubernetes project. Unfortunately, many aspects of the Kubernetes project assume that cloud provider features are tightly integrated into the project. As a result, adopting this new architecture can create several situations where a request is made for information from a cloud provider, but the cloud controller manager may not be able to return that information until the original request is complete.

A good example of this is the TLS bootstrapping feature in the kubelet. Currently, TLS bootstrapping assumes that the kubelet has the ability to ask the cloud provider (or a local metadata service) for all its address types (private, public, etc.), but the cloud controller manager cannot set a node's address types without initializing the node in the first place, which requires that the kubelet have TLS certificates to communicate with the apiserver.

As this initiative evolves, changes will be made to address these issues in upcoming releases.

## Developing your own Cloud Controller Manager

To build and develop your own cloud controller manager, read the [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager.md) doc.

Review discussion on the RBAC binding in the DaemonSet example:

> "Add BIG WARNING that this is DISCOURAGED, or just link to a `system:cloud-controller-manager` ClusterRole and bind to that, and say to the user: fill in necessary verbs/resources here."

> "I think I'm going to add an actual ClusterRole in this example, but I need to hash out what verbs/resources it actually needs access to. I'm going to do some testing and find out; mind if I update this in another PR?"

> "What API calls are cloud controllers responsible for making?"

> "IIRC: [list failed to load]; there could be more, I want to test it to be sure."

> "I haven't tested it but I think this is what the CCM needs currently. (Was planning to test next week.)"

> "Nice! This will be a good starting point for my tests too, thanks @jhorwit2!"

> "We could also use @liggitt's awesome new project to generate it :) https://github.com/liggitt/audit2rbac"