Provide a generic mechanism for flag customizations #106
Another item brought up is that some flags are going to be common (but cannot be assumed as defaults). I'm wondering about the option of layering multiple configMaps, such that we could say something like:
This would then use the set of flags present in the […]. For example, the […]
And the […]
And the resulting apiserver configMap would be:
Internally in the bootkube binary we could support a few common profiles - but letting others self-provide a profile could just mean pointing to a particular directory which contains configMaps. /cc @dghubble @sym3tri @derekparker @pbx0
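As a hypothetical sketch of the layering idea (the names, flags, values, and merge semantics below are all assumptions for illustration, not from the thread), a common "base" configMap could be combined with a deployer overlay to produce the final apiserver configMap:

```yaml
# Hypothetical layering sketch -- all names and values are assumptions.
# A common "base" profile:
apiVersion: v1
kind: ConfigMap
metadata:
  name: apiserver-base
data:
  allow-privileged: "true"
  service-cluster-ip-range: 10.3.0.0/24
---
# A deployer overlay whose values win on conflicts:
apiVersion: v1
kind: ConfigMap
metadata:
  name: apiserver-overrides
data:
  cloud-provider: aws
  service-cluster-ip-range: 10.0.0.0/16
---
# The resulting merged apiserver configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-apiserver
data:
  allow-privileged: "true"
  cloud-provider: aws
  service-cluster-ip-range: 10.0.0.0/16
```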
I think it would be nice to still provide a couple of the more commonly-used options as command-line flags to bootkube -- first and foremost, cloud-provider and things cloud-provider needs to work (e.g. taking into account things like kubernetes/kubernetes#26897). People understand flags and there aren't as many syntax worries as if I have to edit YAML or JSON. But yes, it will quickly get unwieldy if bootkube has to support every flag for every Kubernetes component ever, and so I think it's entirely valid to draw a line and say "beyond this point you have to do X by creating a Y" (whether Y is a configMap, some sort of YAML or JSON manifest, or even just a plaintext file with the desired flags per component).
Configuring manifest flags with another config map seems to add a decent amount of complexity for what is ultimately template rendering. On my side, we're already rendering lots of templates and generating TLS assets, so I think we're moving to simply use […]
A big reason I want to go this route is that upstream is moving toward all components being configured via configMaps. If we follow this pattern we stand to get a lot of functionality out of the box (e.g. we can validate initial configuration, then push that configuration directly into the api as configMaps + our components will natively use those objects already). Rather than coming up with an intermediary config (where config complexity adds up quickly -- see kube-up, kube-aws, etc.), we just use the native end-state -- and provide early validation.
I think this will be a much more common route as people's cluster configuration becomes more standardized for their own environments / is committed after initial rendering. This ultimately is the preferred end-state, as we're never going to be able to provide a "default configuration" that is everything to everyone. However, I think this still lends itself really well to standardizing on configMaps + validation, because it is the standard way of expressing configuration of components -- so even if you don't use our tools to generate manifests at all, as long as you're using configMaps we can at least do some validation (rather than trying to parse out flags from an exec line).
Now that #168 is implemented, the needed flexibility is essentially allowing an admin to modify anything about the rendered manifests. This proposal wouldn't really add much unless we wanted to further abstract higher-level concepts (which we likely do not). I still think we probably want to move to a versioned config file rather than flags - but I opened #565 to track that.
My initial concern is that I don't want to plumb every possible configurable option through bootkube -- because it just becomes unnecessarily complex. However, people have differing needs, so we need a sane way of allowing customization.
I was thinking of breaking the rendering steps into multiple pieces:

init (command: bootkube init)
This would output configMap objects (kubernetes objects themselves) with all of the default values we want for each component. So api-server for example would have a configMap with key=value for each of the flags we default.
At this stage a deployer can just go in and modify each component configMap to the values they would like. This also will help us in the future because eventually all core components will be able to retrieve their config from an api object (componentConfig).
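A sketch of what one of these generated defaults configMaps might look like (the object name, namespace, and flag values here are illustrative assumptions, not bootkube's actual output):

```yaml
# Hypothetical output of "bootkube init" for the api-server component.
# Name, namespace, and defaults are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-apiserver-flags
  namespace: kube-system
data:
  allow-privileged: "true"
  service-cluster-ip-range: 10.3.0.0/24
  secure-port: "443"
```

A deployer would then edit these key=value pairs directly before running the later steps.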
validate (command: bootkube validate)
This step could be run standalone against the configMap objects, and/or as part of the next bootkube render step. Here we will validate that the options provided in the configMaps are compatible / recommended. For example:
Setting controller-manager --service-cluster-ip-range=10.0.0.0/16 and kubelet --cluster-dns=172.14.0.1 is likely an error and we can warn. Or setting --cloud-provider=aws on the controller-manager, but not on the kubelet & api-server, is likely a misconfiguration. This way, flags and their inter-related dependencies can be modeled as a suite of tests -- rather than us trying to plumb this logic through bootkube / templating. And an end user has the same flexibility to modify any flags they want with no code changes necessary (and if validation issues arise, it is just a matter of adding additional tests).
render (command: bootkube render)
This step will be modified to get all configuration from the configMaps created above (and run the same validation from above). After validation, we do the same thing we were doing before: render all of the necessary component manifests -- but have them reference the configMaps for their flags.
To use the configMaps as flags directly (until it is supported upstream), we would need to figure out the best way to convert the key=values to --key=value for use in the command line of the manifest (e.g. command: /hyperkube apiserver $flags). Or as another option, we just let the render step do the actual conversion from the configMap object to the manifest -- and we don't directly use the configMap objects via the api until supported upstream.
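The key=value to --key=value conversion is straightforward; a minimal sketch (function name and sorting choice are my assumptions, not bootkube's implementation):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// flagsFromConfigMap converts a component configMap's data section
// (key=value pairs) into --key=value command-line arguments.
// Illustrative sketch, not actual bootkube code.
func flagsFromConfigMap(data map[string]string) []string {
	flags := make([]string, 0, len(data))
	for k, v := range data {
		flags = append(flags, fmt.Sprintf("--%s=%s", k, v))
	}
	sort.Strings(flags) // deterministic ordering for rendered manifests
	return flags
}

func main() {
	data := map[string]string{
		"cloud-provider":           "aws",
		"service-cluster-ip-range": "10.0.0.0/16",
	}
	fmt.Println("/hyperkube apiserver " + strings.Join(flagsFromConfigMap(data), " "))
	// prints: /hyperkube apiserver --cloud-provider=aws --service-cluster-ip-range=10.0.0.0/16
}
```

Sorting the flags keeps rendered manifests stable across runs, which matters if the output is committed to version control.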