Consider using Jsonnet (and maybe Kubecfg) as a foundation for the rules. #25
@dlorenc FYI

FWIW, I intentionally decoupled the functionality that this currently performs from any sort of templating; you can see some of my rationale here. I definitely appreciate its usefulness, which is why my original prototype had a form of templating :) That said, I certainly anticipate folks feeding the output of templating into these rules.

The resolver already supports multi-document yaml files, see here. I believe the only place this should be problematic today is

At one point I was validating

Beyond composing templating with these rules, I also wanted to enable folks to build stuff that potentially wraps these rules to expose higher-level functionality.
Bottom line, I want to enable all of the things you are talking about and more, and I hope this is a reasonable foundation for doing that, but it'll take some iteration.
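To illustrate the multi-document support mentioned above, here is a minimal sketch (file and target names are made up) of pointing a `k8s_object` at a single yaml file containing several `---`-separated documents; the singular `kind` attribute is the awkward part alluded to:

```python
# BUILD — hypothetical example; target and file names are invented.
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "everything",
    # `kind` is still singular today, which is the part that sits
    # uneasily with a multi-document template.
    kind = "deployment",
    # One file holding multiple `---`-separated k8s documents;
    # the resolver walks each document in turn.
    template = ":everything.yaml",
)
```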
You mean to instantiate things or to store a resolved template?
Is it in a public repo you can link me to?
@hsyed Does what I say make sense? I would love to add some samples showing this stuff working with jsonnet, but I have never touched it and have no good jsonnet + K8s samples (issue). If you have some good examples we could turn into samples for K8s, I'd certainly appreciate the pointer.

@mmikulicic also has bazelbuild/rules_jsonnet#28, which seems like it could be fed into these rules, and it uses
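As a sketch of the "fed into these rules" idea, the output of `jsonnet_to_json` from rules_jsonnet could plausibly be wired straight into a `k8s_object` template (rule names are real, file and target names are invented):

```python
# BUILD — hypothetical wiring of rules_jsonnet output into rules_k8s.
load("@io_bazel_rules_jsonnet//jsonnet:jsonnet.bzl", "jsonnet_to_json")
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

# Render the jsonnet into plain JSON hermetically, as a build action.
jsonnet_to_json(
    name = "guestbook_json",
    src = "guestbook.jsonnet",
    outs = ["guestbook.json"],
)

# Feed the rendered JSON to k8s_object like any other template.
k8s_object(
    name = "guestbook",
    kind = "deployment",
    template = ":guestbook.json",
)
```

Because the rendering happens in an ordinary build action, the templating stays hermetic and the resolver never needs to know jsonnet exists.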
@mattmoor We could setups a slack call next week and I can show you what I have done so far. What you say makes sense. I can see the power of lower pevel primitives that keep doors open. Granularly modelled rules are very good for setting up smoke and unit tests. I hadn't seen that you had a jsonnet issue allready open. What is great about jsonnet is that it can just be treated like simple json, the templating features need not be used- at the same time it has the modelling power that rivals ci systems like saltstack. It is certainly more flexible at creating reusable manifeat than helm as it stands currently. So when I say lets use Jsonnet from the ground up I mean it from the low level perspective it doesn't close any doors. this is a good tutorial on showing how jsonnet mixins work. The kubecfg guys are working on conventions ontop of jsonnet for modelling arbitrary deployment environments, their issue list is a good place to pick up on what they are working towards. The environments are just conventions of laying out jsonnet files and attaching envornment configuration such as environment x uses kubecfg context x. |
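For readers unfamiliar with jsonnet, a tiny illustration of the mixin style mentioned above (all names invented): a base object is specialized per environment with `+`, without ever editing the base.

```jsonnet
// base.jsonnet — a plain k8s-style object; no templating syntax needed.
local base = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  spec: { replicas: 1 },
};

// "Environment" mixins override only what differs; `spec+:` merges
// into the existing spec rather than replacing it.
local staging = { spec+: { replicas: 2 } };
local production = { spec+: { replicas: 10 } };

{
  staging: base + staging,
  production: base + production,
}
```

If the mixin features are ignored, the file degenerates to plain JSON, which is the "from the ground up it doesn't close any doors" point.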
A bit of a brain dump I wrote last night on the scope I want Bazel to cover for k8s deployments.

Multiple Environments

When I created this issue I was hoping for the rules_k8s model of deployments to be rich enough that multiple CI environments can be generated (minikube -> staging -> dev -> qa), where each gets progressively more production-faithful, and that the same declarations could also be used to generate production manifests.

Model Complex Deployments

In terms of scope, it would be great if we could model a production-faithful HA Postgres or Kafka in Bazel. These deployments tend to be very complex to model. If the rules aren't flexible enough to enable a lot of reuse, Helm or another tool would still have to be used to set up parts of an environment.

Mass manifest installation mechanism

Another aspect is the ability to create many different k8s states/fixtures for k8s-aware microservices. Our codebase is evolving toward an architecture that will use Kubernetes as a hub for deploying schemas and plugins (js) in a multi-tenant enterprise system. In this case it isn't just installing manifests into Kubernetes to get services up, but also setting up the business logic in the applications.
Due to time constraints I am considering writing Helm rules for Bazel for the time being, as we already have a lot of charts.

Helm as a CI tool

We currently have a few Helm charts which are being used in a CI capacity. Helm works quite well as a CI tool if you use it in a specific way. What is good about Helm is that it models the upgrade path quite well and will restart components if the charts are modelled correctly.

helm vs kubecfg

Kubecfg (at the moment) only installs manifests and has no logic for rolling upgrades.

helm vs kubectl

kubectl requires a rolling-upgrade command to be issued, and I don't think it discriminates -- I suspect the rolling upgrades are applied to every upgradeable component in a set of manifests -- so if we had a blob of manifests for an entire environment, everything would be restarted.

low level modelling in rules_k8s

On a side note, if kubectl were used in rules_k8s the granular modelling approach would help, as each component would be hermetically tied to the manifests and docker image targets it depends on. An "environment" would be a collection of

rules_k8s as a CI workflow tool

Consider "Model Complex Deployments" in my previous entry... helm charts already exist for complex deployments. In rules_docker I ask for
@hsyed sorry it took so long to get to this. I don't suppose you'll be coming to the Bazel conference in Sunnyvale in early November? I'd love to buy you coffee/beer and chat f2f :)
One of the things I've been playing with recently is getting this up and running for Prow from the
I also resolved the
So my hope is that

You might be able to get away with single rule definitions for multiple environments using Bazel's
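A sketch of what a single definition serving multiple environments might look like, assuming Bazel's `config_setting` + `select()` machinery (all target and file names are invented):

```python
# BUILD — hypothetical multi-environment k8s_object via select().
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

# A command-line-selectable setting: --define env=staging.
config_setting(
    name = "staging",
    values = {"define": "env=staging"},
)

k8s_object(
    name = "deploy",
    kind = "deployment",
    # One rule, different templates per configuration.
    template = select({
        ":staging": "deployment-staging.yaml",
        "//conditions:default": "deployment-prod.yaml",
    }),
)
```

Usage would then be something like `bazel run --define env=staging :deploy.apply` versus the plain `bazel run :deploy.apply` for production.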
Did you two get to talk back in November? It would be great to be able to arbitrarily specialize resources at deploy time without needing the entire build system (i.e.
Minor clarification to above:
kubecfg has full support for rolling upgrades (the same as what you see from other k8s tools), since this is done by k8s itself server-side. E.g.: you can just update from one Deployment version to the next with kubecfg, and k8s will manage a smooth and safe rolling transition between the two. The explicit

A notable difference in this space is that helm adds the possibility for an additional "update" job that gets run between chart versions -- which can be used for things like database schema upgrades. This pattern is generally frowned upon (and thus not supported by k8s out of the box) since it is a risky atomic imperative step and can complicate downgrades, but it is something people are used to using in pre-k8s architectures.
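The point that rollouts are server-side can be seen in the manifest itself: the rolling behaviour is declared on the Deployment, so any client that updates the object (kubecfg, kubectl, helm) gets the same managed transition. A minimal fragment for illustration:

```yaml
# Illustrative fragment only — the rollout policy lives on the object,
# not in the client tool that applies it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the transition
      maxSurge: 1         # at most one extra pod above desired count
```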
I've been working on a repo that integrates jsonnet and rules_k8s. In https://github.com/borg286/better_minig/tree/master/java/com/examples/grpc_redis I have an example where the server depends on a redis setup as its backend for the location database. When the :myns-deep.create target is run, it also runs the server and redis. The :myns-shallow.create target only includes the k8s objects that this service needs (ConfigMap, Prom rules, Service...), not external dependencies like a Prometheus server.

The challenge I have now is that I'd like to have the build dependency reflected in the order that the services are turned on. I believe that Helm offers this kind of dependency; I don't know if kubecfg does. If we had some way to express our dependencies in Bazel targets, that would enable me to do deep dependency turnups consistently, as though a human were rolling out services in a sane manner.
FWIW, I briefly looked at helm early on and (at the time) it was its inability to operate without Tiller that shut down my investigation (template instantiation couldn't be hermetic). Tillerless Helm is now a (very popular) thing, so it's worth revisiting. Perhaps rather than forking the repo you could upstream the support via helm rules here?
I think that sounds great. This is a side question but will inform me on this dev work: is it considered wrong to reference the helm binary by URL and have Bazel extract it, compared with having Bazel compile the whole thing? Helm seems to be mostly just a templating tool. In mkmik's repo he seems to have built kubecfg from scratch, which seems overboard to me.

Regarding Tillerless Helm: I thought that Tiller would be responsible for watching a deployment go out. If we pursue a tillerless solution, then the only benefit we get is letting users specify overrides in yaml files and merging those with their charts to produce composite k8s yaml files, with no dependency assurance.
Someone at Bazel conf had Helm integration. But yeah, rules_helm seems reasonable. That, or make it so you can make kubectl deploy a toolchain ... not sure if I am describing it correctly. I know some folks want Helm and some will not.
I saw that this repo was trying to make kubectl an optional tool to have on the host. Without it bazel would build it from scratch. Are you proposing a flag to the dependency function in one's WORKSPACE file where the user could opt for helm binary and the k8s_object rule would produce .create and .apply targets that have helm install and helm upgrade under the hood? |
I meant something like a helm subdirectory that potentially swaps out aspects of the underlying implementation, but may have a common core for resolving the yamls and such. One other thing about Helm that turned me off was that the Go templating (when used arbitrarily) made it so that you couldn't read/modify/write it in a structured way to do things like resolution. Again, if that can now be done hermetically, a major obstacle is gone.
A significant benefit of incorporating Helm would be access to its plethora of charts. Meaning that one could theoretically define a helm_object target, point it at some values.yaml file, and then depend on that in a subsequent k8s_objects target; this repo would do some magic under the hood and pipe the composite yaml files up into nested chart directories that Helm understands. This is probably getting off on a tangent, but I wanted to point out a possible benefit of this inheritance.
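A purely hypothetical API sketch of the above -- `helm_object` does not exist in rules_k8s; `k8s_objects` does, but the wiring here is invented:

```python
# BUILD — speculative sketch; helm_object is an imagined rule that
# would expand a chart + values.yaml hermetically into plain yaml.
load("@io_bazel_rules_k8s//k8s:objects.bzl", "k8s_objects")

helm_object(
    name = "redis",
    chart = "@helm_charts//:redis",   # an upstream chart, vendored somehow
    values = "redis-values.yaml",     # user overrides, merged at build time
)

# The rendered chart output then composes like any other k8s_object.
k8s_objects(
    name = "everything",
    objects = [
        ":redis",        # expanded chart
        ":my-service",   # an ordinary k8s_object target
    ],
)
```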
Mattmoor, can you help me understand the problems with the hermetic aspect of the go templating? |
To address the title of the bug: I feel that jsonnet is a superior language for inheriting and modifying Kubernetes objects that are fed into rules_k8s. Unlike GCL and Piccolo, it is fairly well structured and has pretty good support for creating libraries and piping values in from BUILD files and targets. Most of the examples I could find elsewhere did imports in a relative way (../other_folder/some.jsonnet), while I found
(This is from almost two years ago, so it's foggy, but I'll try to summarize what I recall.) Go templating wasn't the problem; it was that you needed Tiller to instantiate it (at the time), so build actions couldn't hermetically produce the yamls (to feed to another rule as an input).
Responding to two out of the many points above:
This isn't perfect, but it is good enough for many cases (and does not suffer from some of the issues helm encounters with its hard-coded list of kinds). For example, it will create the certmanager Certificate CRD declaration before using it, and the mysql Namespace, Service and ConfigMaps before creating the wordpress Deployment. If you want something else, then you need to force the order externally by invoking

Fwiw,
As pointed out in other comments above, you lose the installation tracking and ordering without Tiller. (*)
I've read over all the comments and I'll try to summarize everything.

Request 1: Make k8s_object capable of pulling in jsonnet so it can do the templating/rendering.

Request 2: Make a k8s_stack/k8s_environment-like rule that would intelligently handle individual k8s_object targets. Currently :bla.describe can't handle a composite yaml file.

Request 3: Add a deps field to either k8s_objects or k8s_stack where the user-requested command is first executed on dependencies before being executed on this target. I.e. :my-stack with a dependency on //prod/redis:prod would end up with my-stack.upgrade first calling //prod/redis:prod.upgrade and then, seeing its own objects are k8s_object targets, calling .apply on each of them in the order listed in the rule.

I don't like the way kubectl pushes; what about:

Proposal 1: Make helm an optional toolchain target that you opt into somehow.

Proposal 2: Make a helm_rules repo that has a leakier abstraction layer (values.yaml, subcharts...).

Proposal 3: Swap out kubectl with kubecfg.

My feelings so far: rules_jsonnet has advanced Bazel rules for handling jsonnet while kubectl doesn't. Simply wrapping it doesn't feel much different from simply piping the jsonnet_to_json output into a set of k8s_object targets. However, it does remove the need to define a k8s_object for every new resource. The drawback of doing that is that you can't act on individual resources (i.e. updating a PromRules object without the overhead of parsing/checking the entire stack). Allowing one to route some resources into explicit build targets that get their own .create, .apply..., as well as allowing composite sets of files to be updated, feels like the right API. Why can't we do a .apply on a composite file? We should define an API for a general k8s_stack. Its dependencies would have their appropriate tool take over and perform the relevant action appropriate for that tool: create, upgrade, delete, describe, diff.

In the end the action items are to
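A speculative sketch of the k8s_stack API described in Requests 2 and 3 -- nothing here exists in rules_k8s today; all names are invented:

```python
# BUILD — imagined k8s_stack rule combining Requests 2 and 3.
k8s_stack(
    name = "my-stack",
    # Request 2: the stack fans the requested verb (.create, .apply,
    # .describe, .delete) out to each individual object target, so
    # composite yaml never has to be parsed as one blob.
    objects = [
        ":configmap",
        ":service",
        ":deployment",
    ],
    # Request 3: the verb runs on dependencies first, so
    # `bazel run :my-stack.apply` would apply //prod/redis:prod
    # before this stack's own objects.
    deps = ["//prod/redis:prod"],
)
```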
After some further thought, I shouldn't rely on the ordering of the dependency list to dictate the ordering of pushes. Instead I should either explicitly have the user provide some ordering or do it in some other way. In the end I was shooting for something like a poor man's workflow engine. The best thing to do now is simply to have a deps list; each dependency is asked to create/update..., and then the objects in that stack are asked to create/update... I've been thinking about pulling in ksonnet and doing the jsonnet_to_json inside of rules_k8s in some new rule, and I'm actually liking it more now.
As primitives the `k8s_defaults` and `k8s_object` are quite fine grained. I envisage the current `k8s_defaults`, `k8s_object` to evolve to provide some templating parameterised by make variables, json files, statically provided dicts -- this will invariably be limiting.

Here are some suggestions:

- Make `k8s_defaults` more than just defaults for a single type of resource, or a holder for some configuration parameters. A `k8s_defaults` that holds jsonnet file(s) (libsonnet) can model a simple dict of params -- as it does now -- all the way to modelling a particular deployment environment.
- Allow `k8s_object` to represent an arbitrary set of k8s resources instead of just a single resource. A single jsonnet file renders to multiple JSON or YAML files. A real-world microservice is going to have a service, a controller, ingress, secrets, configmap. These should likely be deployed in one step.

I'm working on wiring up kubecfg into our codebase. I have run rules for my `k8s_object` equivalent that map to kubecfg's (apply, delete, validate, show) commands. I can provide more detail if interested.