PoC of hooks #2260

Merged
merged 1 commit into kubernetes:master on Apr 19, 2017

Conversation

@justinsb
Member

justinsb commented Apr 1, 2017

This change is Reviewable

@chrislovecnm
Member

chrislovecnm left a comment

Love it, but yes, it gives the user a lot of capability to monkey with things. I would ask for a lot of logging on both the kops and nodeup side.

Again, awesome, but it is playing with fire.

"/usr/bin/docker",
"run",
"-v", "/:/rootfs/",
"-v", "/var/run/dbus:/var/run/dbus",


@chrislovecnm

chrislovecnm Apr 1, 2017

Member

Can we abstract this somehow? This has scary perms for me ;)

@justinsb

justinsb Apr 6, 2017

Author Member

We want to give the docker hook "root" permissions, at least by default, because we're probably also going to have a "script" hook, which will have more permissions.

@@ -239,6 +239,20 @@ type ClusterSpec struct {

// Tags for AWS instance groups
CloudLabels map[string]string `json:"cloudLabels,omitempty"`

// Hooks for custom actions e.g. on first installation
Hooks []HookSpec `json:"hooks,omitempty"`

@chrislovecnm

chrislovecnm Apr 1, 2017

Member

The name doesn't really suggest system scripts, etc. Any other ideas? Maybe "run" or "system" in the name? RunHook, RunOnSystemHook?

@austinmoore-

Contributor

austinmoore- commented Apr 5, 2017

@justinsb Loving the idea of hooks, and I think it's the right way to move forward with the pre-install script idea. I'm curious as to what you think about having the spec inside the instance group.

I no longer require a startup script, but when I did, one of the things I was doing on startup was adding flags for RBAC to the kube-apiserver manifest. I imagine there would be other similar situations where you would want to do things on a master or node only.

But I feel it could get a bit messy going down this path. If you needed to do the same thing in every instance group, you would need to add that spec to each instance group and then maintain it; changing things would be a pain. A solution could be to have a hook spec at the cluster level that runs for all instance groups, alongside the IG-level spec. But then you potentially have to worry about the order of execution at the same hook points. With these in mind, it may just be better to have hooks at the cluster-wide level only, and let users find workarounds if they need something on a specific instance group.

What are your thoughts?
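
For concreteness, an IG-level variant might hypothetically mirror the cluster-level field, with the same hooks list nested under the InstanceGroup spec instead. This is illustrative only: the PR only adds hooks to the ClusterSpec, and the placement and command below are assumptions.

# hypothetical InstanceGroup-level placement; this PR only adds hooks to the ClusterSpec
...
  hooks:
  - execContainer:
      command:
      - sh
      - -c
      - echo node-only setup >> /rootfs/tmp/hook.log
      image: busybox
...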

@justinsb

Member Author

justinsb commented Apr 6, 2017

@austinmoore- thanks for considering it - which I know is particularly tough when you've invested effort in a different PR.

I think for adding flags we'd ideally map those in our "componentconfig" in the manifests. But I certainly agree that we will want different hooks on different instance groups, and I think you're spot-on in your description of the tradeoffs.

I started with it at the cluster level, but I think we'll end up with both very soon TBH.

The ordering is definitely interesting and I hadn't considered that. I would hope that "cluster then instancegroups" would actually work out in practice - I'm not sure if there's anything you have in mind where it's particularly problematic.

I certainly don't mind adding it to the IG also right now; it won't actually add much complexity to the code, though it will add complexity to the mental model!

I'm trying to experiment with what we can & can't easily run in a container. GPU installation works (surprisingly) well: https://github.com/kubernetes/kops/pull/2298/files ! I know we'll end up with a script action, but I'd love to have canned docker containers for the common things first :-)
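
A canned-container hook of that kind would presumably just name an image and let its entrypoint do the work, along these lines. The image name is a placeholder (not the one used in #2298), and this assumes the command can be omitted in favour of the image's entrypoint:

...
  hooks:
  - execContainer:
      # placeholder image whose entrypoint installs GPU drivers onto the host via /rootfs
      image: example/gpu-bootstrap:latest
...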

@ahawkins

Contributor

ahawkins commented Apr 11, 2017

Can I use this to pre-pull images on my nodes?

@carpenterm

carpenterm commented Apr 13, 2017

+1

We are looking at using kops and need something like this before we can start using it, as we can't configure authn webhooks without pulling a config file down onto the masters. I'm currently hacking the user data in the Terraform output to append a script that pulls down the webhook YAML file. It works, but I want to move away from doing this before we run any important workloads, as I can see it ending in outages when kops overwrites the user data on update and our auth config disappears.
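
A cluster-level hook could plausibly replace that user-data hack, assuming the host filesystem is reachable at /rootfs as in the snippet quoted above; the URL and destination path below are placeholders:

...
  hooks:
  - execContainer:
      command:
      - sh
      - -c
      # placeholder URL and path: fetch the authn webhook config onto the host filesystem
      - wget -O /rootfs/srv/kubernetes/authn-webhook.yaml https://example.com/authn-webhook.yaml
      image: busybox
...

Note that a cluster-level hook like this would run on every instance group, not just the masters, which is where the IG-level discussion above becomes relevant.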

@justinsb

Member Author

justinsb commented Apr 13, 2017

@ahawkins Do you mean to pre-pull images? I think you can, because you have access to the root filesystem, but I can verify. In any case we'll likely add some more mechanisms - I think pre-pulling images is a good one.

But...

I'm also wondering if we should have an image puller. We currently pre-pull some images (particularly for CI), but images can also be GCed by k8s, and if k8s GCs an image that is only available via docker load, we won't get it back. What is your use case for pre-pulling images?

@ahawkins

Contributor

ahawkins commented Apr 13, 2017

@justinsb My use case is lowering the initial time to deploy applications on a new node. We have ~20 different applications, which comes out to something like ~10GB of Docker images. They also share common base images, though (ruby, java, go, python, node, etc.). I'd like to pull those images at node boot time (before joining the cluster) to minimize load on the first deploy.

@justinsb

Member Author

justinsb commented Apr 19, 2017

@ahawkins I've validated that we can have a hook that pulls images using this approach (#2381), though I haven't yet plugged it all together to validate the whole flow.
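
Not necessarily how #2381 wires it up, but one possible shape for an image pre-pull hook, assuming the hook container keeps the /rootfs mount from the snippet quoted above and the docker CLI is available on the host; the image names are placeholders:

...
  hooks:
  - execContainer:
      command:
      - sh
      - -c
      # placeholder image names: pull shared base images via the host's docker daemon
      - chroot /rootfs docker pull ruby:2.4 && chroot /rootfs docker pull openjdk:8-jre
      image: busybox
...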

@chrislovecnm chrislovecnm merged commit 6e81a8c into kubernetes:master Apr 19, 2017

3 checks passed

Jenkins Kubernetes AWS e2e: Build succeeded.
cla/linuxfoundation: justinsb authorized
continuous-integration/travis-ci/pr: The Travis CI build passed
@toebbel

toebbel commented Apr 20, 2017

Which release will include this feature?

@lsjostro

lsjostro commented May 17, 2017

@toebbel it's in the latest release.
clusterSpec example:

...
  hooks:
  - execContainer:
      command:
      - sh
      - -c
      - echo 127.0.0.1 foo.bar >> /rootfs/etc/hosts
      image: busybox
...