Add Cloud Controller Manager addon #3630
Conversation
/assign chrislovecnm
- name: cloud-controller-manager
  # for in-tree providers we use gcr.io/google_containers/cloud-controller-manager
  # this can be replaced with any other image for out-of-tree providers
  image: gcr.io/google_containers/cloud-controller-manager:v{{ .KubernetesVersion }} # Reviewers: Will this work?
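A quick way to sanity-check the templated image line is to render it with Go's text/template, assuming that is how kops expands these manifests; the context struct below is illustrative, not the actual kops type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderImage renders the manifest's image line for a given Kubernetes
// version. The field name KubernetesVersion mirrors the template variable
// in the manifest above; the surrounding struct is a stand-in.
func renderImage(version string) (string, error) {
	tmpl, err := template.New("image").Parse(
		"image: gcr.io/google_containers/cloud-controller-manager:v{{ .KubernetesVersion }}")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	err = tmpl.Execute(&buf, struct{ KubernetesVersion string }{version})
	return buf.String(), err
}

func main() {
	out, err := renderImage("1.7.0")
	if err != nil {
		panic(err)
	}
	// prints: image: gcr.io/google_containers/cloud-controller-manager:v1.7.0
	fmt.Println(out)
}
```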
@chrislovecnm Is this the best way to set the version?
We should replace the whole image for out-of-tree providers?
I am wondering if we should have the container images in code. Pondering.
Any verdict on this?
The verdict is we need an issue open for tracking this.
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
This should be system:cloud-controller-manager
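With the suggested change, the binding would reference a dedicated ClusterRole rather than cluster-admin; a sketch of the corrected fragment:

```yaml
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
```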
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
system:cloud-controller-manager?
resources:
- serviceaccounts
verbs:
- create
This is missing persistent volumes access which GCE/AWS depend on for the labeler controller.
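A rule along these lines would cover the PV labeler controller; the exact verb list is an assumption here:

```yaml
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - update
```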
Thanks, I'll add it.
---
apiVersion: extensions/v1beta1
kind: Deployment
I think for kops, we can use a DaemonSet on masters since we know those will be available. The original docs use a Deployment since that's not always guaranteed to be the case.
We just need one of these? Or do we need one on each master?
Just one
Deployment with the same tags as dns-controller. That will give us one, only on a master. We can follow the same pattern. DaemonSets are great, but we would get three with an HA master.
CCM supports leader-election so DS can still work :).
@chrislovecnm I spoke with @andrewsykim offline, and found that a DaemonSet might be a better approach than a Deployment.
A DaemonSet makes it easy to scale the CCM as the number of master nodes scales. I'll update the implementation shortly.
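The DaemonSet approach would pin the CCM to masters with scheduling like the following sketch; the label and toleration keys are assumptions and vary by kops/Kubernetes version:

```yaml
kind: DaemonSet
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/role: master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
```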
Compare: 97f27b3 to 69eb97c
Added access for PVL Controller
lgtm
pkg/apis/kops/validation/legacy.go (Outdated)
@@ -117,6 +117,11 @@ func ValidateCluster(c *kops.Cluster, strict bool) *field.Error {
	if strict && c.Spec.KubeControllerManager == nil {
		return field.Required(fieldSpec.Child("KubeControllerManager"), "KubeControllerManager not configured")
	}
	if kubernetesRelease.LT(semver.MustParse("1.7.0")) {
nit:
if kubernetesRelease.LT(semver.MustParse("1.7.0")) && c.Spec.ExternalCloudControllerManager != nil
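The suggested combined check can be sketched as below; a minimal major.minor comparison stands in for the semver library used in the PR, and the spec struct is illustrative, not the actual kops type:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// clusterSpec is a stand-in for the kops ClusterSpec; only the field
// relevant to the check is modeled.
type clusterSpec struct {
	ExternalCloudControllerManager *struct{}
}

// ltMinor reports whether version a is less than "major.minor" b,
// comparing only the first two components.
func ltMinor(a, b string) bool {
	pa, pb := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < 2; i++ {
		x, _ := strconv.Atoi(pa[i])
		y, _ := strconv.Atoi(pb[i])
		if x != y {
			return x < y
		}
	}
	return false
}

// validateCCM mirrors the nit: only reject old versions when the
// external cloud-controller-manager is actually requested.
func validateCCM(kubernetesVersion string, spec clusterSpec) error {
	if ltMinor(kubernetesVersion, "1.7") && spec.ExternalCloudControllerManager != nil {
		return fmt.Errorf("cloud-controller-manager requires Kubernetes 1.7 or later")
	}
	return nil
}

func main() {
	fmt.Println(validateCCM("1.6", clusterSpec{ExternalCloudControllerManager: &struct{}{}}))
	fmt.Println(validateCCM("1.6", clusterSpec{}))
}
```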
please file a PR to move this out of legacy. validation.go is the new home.
pkg/apis/kops/validation/legacy.go (Outdated)
@@ -279,23 +284,31 @@ func ValidateCluster(c *kops.Cluster, strict bool) *field.Error {
	}

	if c.Spec.Kubelet != nil && (strict || c.Spec.Kubelet.CloudProvider != "") {
		if k8sCloudProvider != c.Spec.Kubelet.CloudProvider {
			return field.Invalid(fieldSpec.Child("Kubelet", "CloudProvider"), c.Spec.Kubelet.CloudProvider, "Did not match cluster CloudProvider")
		if c.Spec.Kubelet.CloudProvider != "external" {
nit:
if c.Spec.Kubelet.CloudProvider != "external" && k8sCloudProvider != c.Spec.Kubelet.CloudProvider
pkg/apis/kops/validation/legacy.go (Outdated)
	}
	}
	if c.Spec.MasterKubelet != nil && (strict || c.Spec.MasterKubelet.CloudProvider != "") {
		if k8sCloudProvider != c.Spec.MasterKubelet.CloudProvider {
			return field.Invalid(fieldSpec.Child("MasterKubelet", "CloudProvider"), c.Spec.MasterKubelet.CloudProvider, "Did not match cluster CloudProvider")
		if c.Spec.MasterKubelet.CloudProvider != "external" {
same here
pkg/apis/kops/validation/legacy.go (Outdated)
	}
	}
	if c.Spec.KubeAPIServer != nil && (strict || c.Spec.KubeAPIServer.CloudProvider != "") {
		if k8sCloudProvider != c.Spec.KubeAPIServer.CloudProvider {
			return field.Invalid(fieldSpec.Child("KubeAPIServer", "CloudProvider"), c.Spec.KubeAPIServer.CloudProvider, "Did not match cluster CloudProvider")
		if c.Spec.KubeAPIServer.CloudProvider != "external" {
same here
pkg/apis/kops/validation/legacy.go (Outdated)
	}
	}
	if c.Spec.KubeControllerManager != nil && (strict || c.Spec.KubeControllerManager.CloudProvider != "") {
		if k8sCloudProvider != c.Spec.KubeControllerManager.CloudProvider {
			return field.Invalid(fieldSpec.Child("KubeControllerManager", "CloudProvider"), c.Spec.KubeControllerManager.CloudProvider, "Did not match cluster CloudProvider")
		if c.Spec.KubeControllerManager.CloudProvider != "external" {
same here
@@ -115,6 +115,10 @@ func (b *KubeAPIServerOptionsBuilder) BuildOptions(o interface{}) error {
		return fmt.Errorf("unknown cloudprovider %q", clusterSpec.CloudProvider)
	}

	if clusterSpec.ExternalCloudControllerManager != nil {
Adding this here so we can discuss later. I don't think setting this flag is actually required for the apiserver; leaving the cloud provider blank is sufficient, and the same goes for KCM. No change required for this PR, just a note for the future.
@wlan0 please file an issue and make a note in the code
- --use-service-account-credentials
# these flags will vary for every cloud provider
- --allocate-node-cidrs=true
- --configure-cloud-routes=true
If these vary for every cloud provider should we have them be configurable?
That is what I am struggling with: do we abstract this, or make it so that we have a manifest for every cloud provider? Thoughts?
I don't think we need a separate manifest for every cloud provider yet.
Edit:
I think it would suffice to just make them configurable.
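Making the flags configurable could look like the following sketch; the option struct and its field names are hypothetical, not actual kops API fields:

```go
package main

import "fmt"

// ccmOptions is a hypothetical per-cloud options struct; the fields
// correspond to the flags that vary between providers.
type ccmOptions struct {
	AllocateNodeCIDRs    bool
	ConfigureCloudRoutes bool
}

// ccmFlags builds the CCM command-line flags from the options instead
// of hard-coding them in the manifest.
func ccmFlags(o ccmOptions) []string {
	return []string{
		"--use-service-account-credentials",
		fmt.Sprintf("--allocate-node-cidrs=%t", o.AllocateNodeCIDRs),
		fmt.Sprintf("--configure-cloud-routes=%t", o.ConfigureCloudRoutes),
	}
}

func main() {
	for _, f := range ccmFlags(ccmOptions{AllocateNodeCIDRs: true, ConfigureCloudRoutes: true}) {
		fmt.Println(f)
	}
}
```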
Have we addressed this? I think this is a blocker.
Need to get issues filed. Looking good. My call is that we clean up @andrewsykim's nitpicks and iterate.
@andrewsykim @chrislovecnm I've fixed the conditional statement to address Andrew's comments. PTAL.
- "" | ||
resources: | ||
- persistentvolumes | ||
verbs: |
This also needs secret list access since the service account credential flag is enabled.
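Since --use-service-account-credentials is enabled, the role would also need a rule along these lines; the exact verbs are an assumption:

```yaml
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - list
```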
fyi @andrewsykim
Thanks. I updated the PR
@andrewsykim can I get a LGTM?
@wlan0 my previous comment about the CCM flags not being configurable (https://github.com/kubernetes/kops/pull/3630/files#r145873454) wasn't addressed, which I think is a requirement for this.
@andrewsykim Sorry I didn't provide more context before asking for a review. I should've told you that I believe the configuration doesn't change when using the CCM for just GCE and AWS. In the context of present-day kops, those are the only two existing clouds that kops supports. This PR was meant as a step toward using the CCM in kops to run existing functionality, so that we can run e2e tests asap. To address the larger concern, which you accurately pointed out in your comment, I have added another issue, #3669, which will deal with the intricacies of supporting vendor CCMs (such as DO) in the right manner, and their configurability.
@wlan0 we will need another e2e run for ccm as well ;) It will not happen automatically with addons.
/lgtm @wlan0 thanks for clarifying!
Yes, I'm aware
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: andrewsykim, chrislovecnm. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue.
This adds the CCM addon for the Kubernetes cluster.
This is a follow-up PR to #3408.
cc @chrislovecnm @andrewsykim