Makes controller run in-cluster #1
Conversation
```diff
@@ -12,20 +14,20 @@ spec:
     listKind: KubeProxyList
     plural: kubeproxies
     singular: kubeproxy
-  scope: ""
+  scope: Namespaced
```
This is a notable change -- I think it does make sense. The Namespace of the CustomResource can be used to patch the manifests. Most clusters will only have one instance of this CR, but containing it within a namespace like `kube-system` could make RBAC and ownership clearer. You could also manage kube-proxy DaemonSets for different Node Groups in separate Namespaces.
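For illustration only, a minimal sketch of a namespaced CR under that scheme -- the group/version and the names here are assumptions, not taken from this PR:

```yaml
# Hypothetical KubeProxy instance, contained in a namespace as suggested above.
apiVersion: addons.x-k8s.io/v1alpha1   # group/version assumed, not confirmed by this PR
kind: KubeProxy
metadata:
  name: kubeproxy            # hypothetical instance name
  namespace: kube-system     # per-namespace ownership; one CR per node group is also possible
spec: {}
```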
This is in line with the existing operators (e.g. coredns). I think I generated this CRD incorrectly by not picking Namespaced from the outset.
Really great point about the different node groups.
```diff
@@ -22,13 +22,20 @@ spec:
       labels:
         control-plane: controller-manager
     spec:
+      hostNetwork: true
```
This is a reasonable default 👍 Users can override it for esoteric deploys.
```yaml
env:
- name: KUBERNETES_SERVICE_HOST
  value: "172.17.0.2"
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
```
I think these keys should go into an `apiserver-endpoint.patch.yaml`. You can then include that patch in the `kustomization.yaml`.
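As a rough sketch of that layout -- the patch file name comes from the comment, while the Deployment name and namespace follow the usual kubebuilder defaults and are assumptions here:

```yaml
# apiserver-endpoint.patch.yaml -- strategic-merge patch carrying the endpoint env vars
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager   # assumed kubebuilder default name
  namespace: system          # assumed kubebuilder default namespace
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        - name: KUBERNETES_SERVICE_HOST
          value: "172.17.0.2"
        - name: KUBERNETES_SERVICE_PORT
          value: "6443"
```

and then referenced from the `kustomization.yaml`:

```yaml
patchesStrategicMerge:
- apiserver-endpoint.patch.yaml
```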
```yaml
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
```
Same story here -- I think these should be separated into a `tolerate-controlplane.patch.yaml`. It's actually possible to create a separate variant for this kind of install, but starting with separate patches is a good step.
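The tolerations would move the same way; a sketch, again assuming the default kubebuilder Deployment name and namespace:

```yaml
# tolerate-controlplane.patch.yaml -- lets the manager schedule onto not-ready / master nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager   # assumed kubebuilder default name
  namespace: system          # assumed kubebuilder default namespace
spec:
  template:
    spec:
      tolerations:
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
```

It would be included from the `kustomization.yaml` alongside the endpoint patch above.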
```yaml
- apiGroups: [""]
  resources: ["*"]
  verbs: ["list", "get", "watch"]
- apiGroups: [""]
  resources: ["events", "serviceaccounts"]
  verbs: ["create", "patch", "update"]
- apiGroups: ["apps", "extensions"]
  resources: ["daemonsets"]
  verbs: ["get", "watch", "list", "create", "patch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterrolebindings"]
  verbs: ["get", "watch", "list", "create"]
- apiGroups: ["app.k8s.io"]
  resources: ["applications"]
  verbs: ["get", "watch", "list", "create", "patch"]
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["list", "watch"]
- apiGroups: ["events.k8s.io"]
  resources: ["events"]
  verbs: ["create", "patch", "update"]
```
These RBAC additions look good. I think they should be moved to the `role.yaml`, which is the primary role bound to the operator's ServiceAccount.
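For context, a sketch of where those rules would land; `manager-role` is the usual kubebuilder-generated name and is an assumption here:

```yaml
# config/rbac/role.yaml -- the ClusterRole bound to the operator's ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role   # assumed kubebuilder default
rules:
# ...existing generated rules...
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["list", "watch"]
# ...plus the other additions from the diff above...
```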
```sh
docker inspect kind-control-plane-1 | grep IPAddress
```

Replace it in the `manager.yaml`.
If I run `make manifests`, will this get overwritten? If so, it may make more sense to have the user edit the patch (`patches/apiserver_endpoint.patch.yaml`).
True. That should change.
I think this looks fantastic, thank you for picking this up @somtochiama!
I think this is far enough along that we should just do any follow-ups in kubernetes-sigs/cluster-addons.
```go
k8s.io/kubeadm/kinder v0.0.0-20191014153037-d541f020334c // indirect
sigs.k8s.io/controller-runtime v0.2.2
sigs.k8s.io/kubebuilder-declarative-pattern v0.0.0-20190926123507-e845b6c6f25a
sigs.k8s.io/cluster-addons/dashboard v0.0.0-20200515184536-657eb5be7e85
```
Do you know why the dashboard operator and kind are getting pulled into this file? Maybe a weird local cache issue?
That's weird. I'll look into it.