PoC of hooks #2260
Conversation
Love it, but yes, it gives the user a lot of capability to monkey with stuff. I would ask for a lot of logging on both the kops and nodeup sides.
Again, awesome, but it is playing with fire.
"/usr/bin/docker", | ||
"run", | ||
"-v", "/:/rootfs/", | ||
"-v", "/var/run/dbus:/var/run/dbus", |
Can we abstract this somehow? This has scary perms for me ;)
We want to give the docker hook "root" permissions, at least by default, because we're probably also going to have a "script" hook, which will have more permissions.
@@ -239,6 +239,20 @@ type ClusterSpec struct {

	// Tags for AWS instance groups
	CloudLabels map[string]string `json:"cloudLabels,omitempty"`

	// Hooks for custom actions e.g. on first installation
	Hooks []HookSpec `json:"hooks,omitempty"`
The name doesn't suggest system scripts, etc. Any other ideas? Maybe "run" or "system" in the name: RunHook, or RunOnSystemHook?
@justinsb Loving the idea of hooks, and I think it's the right way to move forward with the pre-install script idea. I'm curious what you think about having the spec inside the instance group.

I no longer require a startup script, but when I did, one of the things I was doing on startup was adding flags for RBAC to the kube-apiserver manifest. I imagine there are other similar situations where you would want to do things on a master or node only. But I feel it could get a bit messy going down this path: if you needed to do the same thing in every instance group, you would need to add that spec to each instance group and then maintain it. Changing things would be a pain.

A solution could be to have a hook spec at the cluster level that runs for all instance groups, alongside the IG-level spec. But then you potentially have to worry about the order of execution at the same hook points. With this in mind, it may just be better to have hooks at the cluster-wide level only, and let users find workarounds if they need something on a specific instance group. What are your thoughts?
@austinmoore- thanks for considering it, which I know is particularly tough when you've invested effort in a different PR. For adding flags, I think we'd ideally map those in our "componentconfig" in the manifests. But I certainly agree that we will want different hooks on different instance groups, and I think you're spot-on in your description of the tradeoffs. I started with it at the cluster level, but I think we'll end up with both very soon, TBH.

The ordering is definitely interesting, and I hadn't considered that. I would hope that "cluster, then instance groups" would work out in practice; I'm not sure if there's anything you have in mind where it's particularly problematic. I certainly don't mind adding it to the IG right now as well; it won't actually add much complexity to the code, though it will add complexity to the mental model!

I'm trying to experiment with what we can and can't easily run in a container. GPU installation works (surprisingly) well: https://github.com/kubernetes/kops/pull/2298/files ! I know we'll end up with a script action, but I'd love to have canned docker containers for the common things first :-)
Can I use this to pre-pull images on my nodes?
+1. We are looking at using kops and need something like this before we can start using it, as we can't configure authn webhooks without pulling a config file down on the masters. I'm currently hacking the Terraform-outputted user data to append a script that pulls down the webhook YAML file. It works, but I want to move away from doing this before we run any important workloads, as I can see it ending in outages when kops overwrites the user data on update and our auth config disappears.
@ahawkins Do you mean to pre-pull images? I think so, because you have access to the root filesystem, but I can verify. In any case, we'll likely add some more mechanisms; I think pre-pulling images is a good one. But I'm also wondering if we should have an image puller. We currently pre-pull some images (particularly for CI), but images can also be GCed by k8s, and k8s GCs an image that is only available via …
@justinsb My use case is lowering the initial time to deploy applications on a new node. We have ~20 different applications, which comes out to something like ~10GB of Docker images. They do share common base images, though (ruby, java, go, python, node, etc.). I'd like to pull those images at node boot time (before joining the cluster) to minimize load on the first deploy.
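Building on the pre-pull discussion, a hook along these lines might cover that use case. This is a speculative, unverified sketch: it assumes the host filesystem is mounted at /rootfs (as in the busybox example later in this thread) and that chroot-ing into it gives the hook access to the host's docker client and socket; `ruby:2.4` stands in for one of the shared base images.

```yaml
# Hypothetical sketch only -- not a documented kops feature.
# Assumes /rootfs is the host filesystem and the host docker
# client/socket are reachable after chroot.
hooks:
- execContainer:
    image: busybox
    command:
    - chroot
    - /rootfs
    - /usr/bin/docker
    - pull
    - ruby:2.4
```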
Which release will include this feature?
@toebbel it's in the latest release.

...
hooks:
- execContainer:
    command:
    - sh
    - -c
    - echo 127.0.0.1 foo.bar >> /rootfs/etc/hosts
    image: busybox
...