
Makes controller run in-cluster #1

Merged
merged 3 commits into johnsonj:kubeproxy on Jun 10, 2020

Conversation

somtochiama

No description provided.

@@ -12,20 +14,20 @@ spec:
     listKind: KubeProxyList
     plural: kubeproxies
     singular: kubeproxy
-  scope: ""
+  scope: Namespaced

This is a notable change -- I think it does make sense.
The Namespace of the CustomResource can be used to patch the manifests.
Most clusters will only have 1 instance of this CR, but containing it within a namespace like kube-system could make RBAC and ownership more clear.
You could manage kube-proxy daemonsets for different Node Groups in separate Namespaces.
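For illustration, a namespaced instance could look like this (a sketch; the apiVersion assumes the addons.x-k8s.io/v1alpha1 group used by the cluster-addons operators):

```yaml
apiVersion: addons.x-k8s.io/v1alpha1  # assumed group/version
kind: KubeProxy
metadata:
  name: kubeproxy
  namespace: kube-system  # RBAC and ownership scoped to this namespace
```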

johnsonj (Owner):

This is in line with the existing operators (e.g. coredns). I think I generated this CRD incorrectly by not picking Namespaced from the outset.

Really great point about different node groups.

@@ -22,13 +22,20 @@ spec:
       labels:
         control-plane: controller-manager
     spec:
+      hostNetwork: true

this is a reasonable default 👍
users can override for esoteric deploys
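For example, an override could be a small strategic-merge patch (a sketch; the file name is hypothetical and the Deployment name/namespace assume the stock kubebuilder scaffolding):

```yaml
# no-hostnetwork.patch.yaml (hypothetical file name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      hostNetwork: false
```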

Comment on lines 34 to 38

+        env:
+        - name: KUBERNETES_SERVICE_HOST
+          value: "172.17.0.2"
+        - name: KUBERNETES_SERVICE_PORT
+          value: "6443"

I think these keys should go into an `apiserver-endpoint.patch.yaml`. You can then include that patch in the `kustomization.yaml`.
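A sketch of what that could look like (Deployment/container names assume the stock kubebuilder scaffolding; the values are the sample ones from the diff):

```yaml
# apiserver-endpoint.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        - name: KUBERNETES_SERVICE_HOST
          value: "172.17.0.2"
        - name: KUBERNETES_SERVICE_PORT
          value: "6443"
```

and then reference it from the kustomization:

```yaml
# kustomization.yaml (excerpt)
patchesStrategicMerge:
- apiserver-endpoint.patch.yaml
```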

Comment on lines 47 to 53

+      tolerations:
+      - key: "node.kubernetes.io/not-ready"
+        operator: "Exists"
+        effect: "NoSchedule"
+      - key: "node-role.kubernetes.io/master"
+        operator: "Exists"
+        effect: "NoSchedule"

Same story here -- I think these should be separated into `tolerate-controlplane.patch.yaml`.
It's actually possible to create a separate variant for this kind of install, but starting with separate patches is a good step.
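A sketch of that patch, reusing the tolerations from the diff (Deployment name/namespace again assume the kubebuilder defaults):

```yaml
# tolerate-controlplane.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      tolerations:
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
```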

Comment on lines 14 to 34

+- apiGroups: [""]
+  resources: ["*"]
+  verbs: ["list", "get", "watch"]
+- apiGroups: [""]
+  resources: ["events", "serviceaccounts"]
+  verbs: ["create", "patch", "update"]
+- apiGroups: ["apps", "extensions"]
+  resources: ["daemonsets"]
+  verbs: ["get", "watch", "list", "create", "patch"]
+- apiGroups: ["rbac.authorization.k8s.io"]
+  resources: ["clusterrolebindings"]
+  verbs: ["get", "watch", "list", "create"]
+- apiGroups: ["app.k8s.io"]
+  resources: ["applications"]
+  verbs: ["get", "watch", "list", "create", "patch"]
+- apiGroups: ["discovery.k8s.io"]
+  resources: ["endpointslices"]
+  verbs: ["list", "watch"]
+- apiGroups: ["events.k8s.io"]
+  resources: ["events"]
+  verbs: ["create", "patch", "update"]

These RBAC additions look good.
I think they should be moved to `role.yaml`, which is the primary role bound to the operator's ServiceAccount.
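In a stock kubebuilder layout that means folding them into something like this (a sketch; the path and role name follow the scaffolding defaults):

```yaml
# config/rbac/role.yaml (excerpt)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["list", "get", "watch"]
# ...remaining rules from the diff above...
```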

```
docker inspect kind-control-plane-1 | grep IPAddress
```

Replace it in the `manager.yaml`.
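As an aside, `docker inspect` can print just the address with a Go template instead of grepping (a sketch using the same container name):

```
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane-1
```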
johnsonj (Owner):

If I run `make manifests`, will this get overwritten? If so, it may make more sense to have the user edit the patch (`patches/apiserver_endpoint.patch.yaml`) instead.

somtochiama (Author):

True. That should change.

@johnsonj (Owner) left a comment:

I think this looks fantastic, thank you for picking this up @somtochiama!

I think this is far enough along that we should just do any follow-ups in kubernetes-sigs/cluster-addons.

+	k8s.io/kubeadm/kinder v0.0.0-20191014153037-d541f020334c // indirect
 	sigs.k8s.io/controller-runtime v0.2.2
 	sigs.k8s.io/kubebuilder-declarative-pattern v0.0.0-20190926123507-e845b6c6f25a
+	sigs.k8s.io/cluster-addons/dashboard v0.0.0-20200515184536-657eb5be7e85
johnsonj (Owner):

Do you know why the dashboard operator and kind are getting pulled into this file? Maybe a weird local cache issue?

somtochiama (Author):

That's weird. I'll look into it.
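If it helps, standard Go tooling can show what (if anything) pulls those modules in:

```
go mod why -m sigs.k8s.io/cluster-addons/dashboard
go mod why -m k8s.io/kubeadm/kinder
go mod tidy   # prunes requirements nothing imports
```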

@johnsonj merged commit 4e1497c into johnsonj:kubeproxy on Jun 10, 2020