This repository has been archived by the owner on Oct 2, 2023. It is now read-only.

Kubernetes rules #82

Closed
mattmoor opened this issue Jun 30, 2017 · 13 comments

Comments

@mattmoor
Contributor

I am opening this issue to track discussions around what shape rules_k8s might take, and to enumerate the kinds of scenarios folks would like to see rules_k8s cover.

@mattmoor
Contributor Author

@sebgoa @dlorenc @jmhodges FYI

@mattmoor
Contributor Author

mattmoor commented Jul 3, 2017

@r2d4 @dlorenc FYI...

I've been playing around a bit and have a prototype of some rules for managing a k8s Deployment. These are highly experimental.

I've been playing around with using these to deploy different environments from a single template in my bazel-grpc "Hello World" app. You can explore the README in mattmoor/rules_k8s, but what I'd expect to become the main workhorse for development would be:

bazel run :dev.replace

At least for this relatively simple app, if I make some edits the above command takes <10 seconds to have the new app running on my cluster (including C++ compilation, image packaging, image pushing, and kubectl replace). Clearly this will degrade with slower compilation, a bigger "app" layer, and/or more containers, but it is likely even faster for uncompiled languages whose "app" layer is essentially a handful of source files.

It is notable that there is very little Deployment-specific logic in this; only a handful of commands take a kind argument. There is likely extensive opportunity for code reuse across other resource types.

Errata / TODO / Stuff I still don't like:

  1. Requires fully-qualified tags.
  2. Implicitly assumes kubectl auth (this is "state of the art" for Bazel; Hi @steren :))
  3. Implicitly assumes kubectl points at the intended K8s cluster (Probably the biggest outstanding model issue, IMO).
  4. Support running on minikube. Maybe we can make this a degenerate case of "which cluster?" (above).
  5. Consider support for a closed-form bundle, e.g. docker save with all referenced images + the instantiated yaml.
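Item 5's closed-form bundle could be as simple as an archive pairing the instantiated yaml with the `docker save` output for each referenced image. A minimal sketch, assuming such a layout (the `make_bundle` helper and the `resolved.yaml`/`images/` naming are hypothetical, not part of the prototype):

```python
import tarfile

def make_bundle(bundle_path, yaml_path, image_tars):
    """Pack the instantiated yaml plus `docker save` tarballs into one archive.

    Hypothetical layout: the resolved yaml at the archive root, and each
    image tarball under images/.
    """
    with tarfile.open(bundle_path, "w") as bundle:
        bundle.add(yaml_path, arcname="resolved.yaml")
        for tar in image_tars:
            bundle.add(tar, arcname="images/" + tar.rsplit("/", 1)[-1])
    return bundle_path
```

The consumer side (a hypothetical `kubectl load` or `minikube load`) would then `docker load` each member of `images/` and apply `resolved.yaml`.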

@mattmoor
Contributor Author

mattmoor commented Jul 3, 2017

FYI I see 5. above as something to go hand-in-hand with a kubectl load or minikube load command.

@mattmoor
Contributor Author

FWIW, I had a demo bug (I hadn't dropped :image.tar), so this is actually ~6 seconds :)

@niclaslockner

I'd like to be able to build a container, push it to the docker daemon running in a minikube cluster, and then create a deployment using that container.

I've tried out the following rule from mattmoor/rules_k8s (with the address to the minikube VM hardcoded while testing):

k8s_object(
    name = "foo_deploy",
    cluster = "minikube",
    images = {
        "192.168.99.100:2376/test:latest": ":foo_build",
    },
    kind = "deployment",
    substitutions = {
        "name": "test",
        "replicas": "1",
        "port": "50053",
    },
    template = ":deployment.yaml.tpl",
)
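For reference, the substitution step is essentially string templating over the yaml. A sketch of what the rule above would do with its `substitutions` dict, assuming a `%{key}` marker syntax (an assumption; the prototype's actual marker syntax may differ):

```python
import re

# Hypothetical deployment.yaml.tpl using an assumed %{key} marker syntax.
TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: %{name}
spec:
  replicas: %{replicas}
  template:
    spec:
      containers:
      - name: %{name}
        ports:
        - containerPort: %{port}
"""

def instantiate(template, substitutions):
    """Replace each %{key} marker with its value from the substitutions dict."""
    return re.sub(r"%\{(\w+)\}", lambda m: substitutions[m.group(1)], template)
```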

Running eval $(minikube docker-env) followed by this rule gives me the following error:

Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/__main__.py", line 133, in <module>
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/__main__.py", line 120, in main
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/client/v2_2/docker_session_.py", line 71, in __init__
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/client/v2_2/docker_http_.py", line 177, in __init__
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/client/v2_2/docker_http_.py", line 199, in _Ping
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/transport/transport_pool_.py", line 62, in request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1659, in request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1399, in _request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1319, in _conn_request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1092, in connect
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
usage: resolver.par [-h] [--override OVERRIDE]
resolver.par: error: argument --override: expected one argument

which I assume suggests that the DOCKER environment variables (with the path to the certs, etc.) are currently not picked up.
Is there a way around this, or does google/containerregistry not support this?

Omitting "images" from the rule, the deployment is successfully created in minikube (though it of course fails to pull an image).

@mattmoor
Contributor Author

@dlorenc @r2d4 FYI.

Indeed, I have not fully worked out the appropriate interaction with minikube in my prototype. Frankly, I am glad it worked as well as you describe! However, minikube is clearly one of the core scenarios I'd like a real rules_k8s to support.

For minikube, one of the options I'd considered previously was side-loading the containers into the Docker daemon (e.g. via docker load), but if we can achieve minikube support without forking the code path, that would certainly be preferable. Does minikube natively run a registry that folks use in the way you describe?

The google/containerregistry library does not currently support the environment variables you describe, but it should probably be made to, certainly if that's the biggest blocker for minikube support.

@dlorenc
Contributor

dlorenc commented Aug 18, 2017

We don't have a docker registry by default, but it's possible to run one in minikube. You then need to make sure your pods all reference the in-cluster registry namespace for containers, though.

The env vars probably make the most sense.

@niclaslockner

niclaslockner commented Aug 18, 2017

As far as I know, minikube only has a docker daemon running by default (see https://github.com/kubernetes/minikube/blob/master/docs/reusing_the_docker_daemon.md).

You can then use minikube docker-env to get access to this daemon:

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="...."
export DOCKER_API_VERSION="1.23"

Using docker load would be an option, but it would indeed be very neat if k8s_object/deploy were able to load the image into a daemon directly (either a local daemon or the one specified by the environment variables above).
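If the rules did grow docker-env awareness, consuming the export lines shown above is straightforward. A sketch of the parsing side (the `parse_docker_env` helper is hypothetical; the rules would still need to feed the resulting variables into their Docker transport):

```python
import re

def parse_docker_env(output):
    """Parse `minikube docker-env` output into a {VAR: value} dict.

    Handles lines of the form: export NAME="value"
    and ignores anything else (comments, blank lines).
    """
    env = {}
    for line in output.splitlines():
        m = re.match(r'export (\w+)="(.*)"', line.strip())
        if m:
            env[m.group(1)] = m.group(2)
    return env
```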

@mattmoor
Contributor Author

@niclaslockner Sure, what I'd meant was that I'd want k8s_object to support docker load when targeting minikube and docker_push when targeting a proper cluster.

Considering we already have users specify the cluster name, we could have a separate, analogous configuration for minikube that triggers this path automatically; I was just hoping to avoid the dual logic internally.

What I'm thinking is something like:

k8s_defaults(
    name = "k8s_local_deploy",
    kind = "deployment",
    minikube = True,   # Use minikube CLI to determine the rest.
)

If juggling multiple minikubes is a thing (and they are distinguished by cluster name), then perhaps a better interface would be a parallel minikube_defaults rule with an identical signature:

minikube_defaults(
    name = "k8s_local_deploy",
    kind = "deployment",
    cluster = "my-local-cluster",
)

@mattmoor
Contributor Author

I wanted to surface my current thinking around how these rules will manifest in the near term, and solicit feedback.

My current prototype conflates three things:

  1. templating (build w/ substitutions),
  2. image resolution (run w/o images), and
  3. publishing dependent images (run w/ images).

I think that the most immediate value of these rules is delivering on (3) and enabling tight iteration. I believe (2) can be viewed as a slight extension of this.

I want to punt on substitution in v1 for a few reasons:

  • I am not completely satisfied with the substitution syntax as I have it, and don't want to commit to it indefinitely.
  • It should be quite straightforward to add substitution to the rules later (but hard to remove).
  • You should be able to feed the template argument in from something like rules_jsonnet or rules_ksonnet, so inline substitution isn't required.
  • I think there is a lot of division in the K8s community about how to handle this, so (similar to rules_[jk]sonnet) we may want this handled via an external rule.

So in the immediate term, I think the surface I will target (with each bullet as an increment of functionality) is:

  • build :foo: largely a no-op that returns the .yaml it is passed.
  • run :foo (w/o images): resolve tags to digests.
  • run :foo (w/ images): publish the listed images, resolve the rest tag => digest.
  • run :foo.{bar}: for {bar} in create/replace/delete (available iff cluster="" is specified).
  • run :foo.describe
  • run :foo.expose: and other ad hoc actions.

How important do folks think templating is? Does my logic here make sense? I'd appreciate any feedback here.

@dlorenc
Contributor

dlorenc commented Aug 21, 2017

build :foo: largely a no-op that returns the .yaml it is passed.

Would this also build any images referenced in the yaml?

run :foo (w/o images): resolve tags to digests.
run :foo (w/ image): publish listed images, resolve the rest tag => digest.

I'm not sure I understand the difference here. Is this about whether :foo is an image, or if it references one?

run foo.expose: and other ad hoc actions.

Is the idea to completely wrap kubectl with these extra actions?

@mattmoor
Contributor Author

@dlorenc I'm not sure that in the first increment I'll even expose the images kwarg, but once it's there it would, because the referenced images would be runfiles of the executable version.

The difference is:

  • w/o images the tag => digest resolution is based on what's currently published.
  • w/ images the tag => digest resolution is an output of publishing images.

Technically, with multiple image references (and a partial images override) you could get a mix of behaviors.
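That per-reference mix could look roughly like the sketch below: references named in the images override get the digest returned by publishing, and everything else falls back to a lookup of what's currently published. The function name and the lookup callback are hypothetical illustrations, not the prototype's API:

```python
import re

def resolve_images(yaml_text, pushed_digests, lookup_published):
    """Rewrite `image: <tag>` references in a yaml string to digest form.

    pushed_digests: {tag: digest} for images just published (the images kwarg).
    lookup_published: callback returning the currently published digest for
    any tag not in pushed_digests.
    """
    resolved = yaml_text
    for tag in re.findall(r"image:\s*(\S+)", yaml_text):
        digest = pushed_digests.get(tag) or lookup_published(tag)
        name = tag.rsplit(":", 1)[0]  # strip the tag, keep registry/repository
        resolved = resolved.replace("image: " + tag,
                                    "image: %s@%s" % (name, digest))
    return resolved
```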

We don't need to fully wrap kubectl, but it was convenient when paired with templating because the deployment name could/would vary. In this more static world, perhaps we stop at the basics.

Regarding templating, I think that if we head down that route (in a further increment), we should adhere to this accepted K8s design proposal.

@mattmoor
Contributor Author

I have created the repo: https://github.com/bazelbuild/rules_k8s

I will start adding some of the elements of my prototype there as I break off pieces and clean them up. I have enabled issues on that repo, so let's discuss further topics there.
